Hack a Day

Fresh hacks every day

A YouTube Subscriber Counter With A Tetris Twist


When it comes to YouTube subscriber counters, there’s not much wiggle room for creativity. Sure, you can go with Nixies or even more exotic displays, but in the end a counter is just a bunch of numbers.

But [Brian Lough] found a way to jazz things up with this Tetris-playing YouTube sub counter. For those of you not familiar with [Brian]’s channel, it’s really worth a watch. He tends toward long live-stream videos where he works on one project for a marathon session, and there’s a lot to learn from peeking over his virtual shoulder.

This project stems from an earlier video, posted after the break, which itself was a condensation of several sessions hacking with the RGB matrix that would form the display for this project. He’s become enamored of the cheap and readily-available 64×32 pixel RGB displays, and borrowing an idea from McLighting author [toblum], he decided that digits assembled from falling Tetris blocks would be a nice twist. [Brian] had to port the Tetris-ifying code to Arduino before getting the ESP8266 to do the work of fetching the subscriber count and updating the display. We think the display looks great, and the fact that the library is open and available means that you too can add Tetris animations to your own projects.
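
For the curious, the bones of such a counter are easy to sketch. What follows is a minimal outline assuming the ESP8266 Arduino core, WiFiClientSecure, and ArduinoJson; the Wi-Fi credentials, channel ID, API key, and the drawCountWithTetris() stand-in for the animation library are all placeholders rather than [Brian]’s actual code:

```cpp
#include <ESP8266WiFi.h>
#include <WiFiClientSecure.h>
#include <ArduinoJson.h>

const char* WIFI_SSID  = "...";     // placeholders throughout
const char* WIFI_PASS  = "...";
const char* CHANNEL_ID = "UC...";   // channel to track
const char* API_KEY    = "...";     // YouTube Data API v3 key

// Stand-in for the Tetris animation library, which draws each digit
// out of falling tetromino blocks.
void drawCountWithTetris(long subs) { /* animate digits here */ }

long fetchSubCount() {
  WiFiClientSecure client;
  client.setInsecure();             // skip cert validation for brevity
  if (!client.connect("www.googleapis.com", 443)) return -1;
  client.printf("GET /youtube/v3/channels?part=statistics&id=%s&key=%s"
                " HTTP/1.1\r\nHost: www.googleapis.com\r\n"
                "Connection: close\r\n\r\n", CHANNEL_ID, API_KEY);
  // Skip the HTTP headers, then parse the JSON body
  while (client.connected() && client.readStringUntil('\n') != "\r") {}
  DynamicJsonDocument doc(2048);
  if (deserializeJson(doc, client)) return -1;
  // subscriberCount arrives as a JSON string, so convert it
  return atol(doc["items"][0]["statistics"]["subscriberCount"] | "0");
}

void setup() {
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(250);
}

void loop() {
  long subs = fetchSubCount();
  if (subs >= 0) drawCountWithTetris(subs);
  delay(60000);                     // poll once a minute
}
```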

None of this is to say that more traditional sub counters can’t be cool too. From a minimalist display to keeping track of all your social media, good designs are everywhere. And adding a solid copper play button is a nice touch too.

Calm Down: It’s Only Assembly Language

Tue, 06/19/2018 - 22:00

Based on [Ben Jojo’s] title — x86 Assembly Doesn’t have to be Scary — we assume that normal programmers fear assembly. Most hackers don’t mind it, but we also don’t often have an excuse to program assembly for desktop computers.

In fact, the post is really well suited for the typical hacker because it focuses on the real mode of an x86 processor after it boots. What makes this tutorial a little more interesting than the usual lecture is that it has interactive areas, where a VM runs your code in the browser after assembling it with NASM.

We really like that format of reading a bit and then playing with some code right in the browser. There is something surreal about watching a virtual PC booting up inside your browser. Yeah, we’ve seen it before, but it still makes our eyebrows shoot up a little.

We hope he’ll continue this as a series, because right now it stops after talking about a few BIOS functions. We’d love to see more about instructions, indexing, string prefixes, and even moving to code that would run under Linux or Windows. It would be nice, too, if there was some information about setting up a local environment. Now if you want to make a serious investment and you use Linux, this book is a lot to chew on but will answer your questions.

Of course, there are many tutorials, but this is a fun if brief introduction. If you want to know more about assembly outside the browser, we covered that. If you really want to write a real bootloader, there’s help for that, too.

Refurbishing A DEC 340 Monitor

Tue, 06/19/2018 - 19:00

Back in the “good old days” movie theaters ran serials. Every week you’d pay some pocket change and see what happened to Buck Rogers, Superman, or Tex Granger that week. Each episode would, of course, end in a cliffhanger. [Keith Hayes] has started his own serial about restoring a DEC 340 monitor found in a scrap yard in Australia. The 340 — not a VT340 — looks like it could appear in one of those serials, with its huge cabinets and round radar-like display. [Keith] describes the restoration as “his big project of the year” and we are anxious to see how the cliffhangers resolve.

He’s been lucky, and he’s been unlucky. The lucky part is that he has the cabinet with the CRT and the deflection yoke. Those would be very difficult to replace. The unlucky part is that one entire cabinet of electronics is missing.

Keep in mind, this monitor dates from the 1960s when transistors were fairly new. The device is full of germanium transistors and oddball silicon transistors that are unobtainable. A great deal of the circuitry is on “system building block” cards. This was a common approach in those days, to create little PC boards with a few different functions and build your circuit by wiring them together. Almost like a macro-scale FPGA with wire backplanes as the programming.

Even if some of the boards were not missing, there would be some redesign work ahead. The old DEC machine used a logic scheme that shifted between ground and a negative voltage. [Keith] wants to have a more modern interface into the machine so the boards that interface with the outside world will have to change, at least. It sounds like he’s on his way to doing a modern remake of the building block cards for that reason, and to preserve the originals which are likely to be difficult to repair.

The cliffhanger to this first installment is a brief description of what one of the system building block cards looks like. The 1575 holds 8 transistors and 11 diodes. It’s apparently an analog building block made to gate signals from the monitor’s digital to analog converters to other parts of the circuit. You’ll have to tune into the next episode to hear more of his explanation.

If you want to read about how such a thing was actually used, DECUS had a programming manual that you can read online. Seeing the round monitor made us think of the old PDP-1 that lives at the Computer History Museum. We are sure it had lots of practical uses, but we think of it as a display for Spacewar.

Laser Cutter Turns Scrapped To Shipped

Tue, 06/19/2018 - 16:00

We’ll go way out on a limb here and say you’ve probably got a ridiculous amount of flattened cardboard boxes. We’re buying more stuff online than ever before, and all those boxes really start to add up. At the least we hope they’re making it to the recycling bin, but what about reusing them? Surely there’s something you could do with all those empty shipping boxes…

Here’s a wild idea… why not use them to ship things? Not exactly as they are, though; unless you’re in the business of shipping big stuff, they probably won’t do you much good as-is. Instead, why not turn those big flattened cardboard boxes into smaller, more convenient shippers? That’s exactly what [Felix Rusu] has done, and we’ve got to say, it’s a brilliant idea.

[Felix] started by tracing the outline of the USPS Priority Small Flat Rate Box, which was the perfect template as it comes to you flat packed and gets folded into its final shape. He fiddled with the design a bit, and in the end had a DXF file he could feed into his 60W CO2 laser cutter. By lowering the power to 15% on the fold lines, the cutter is even able to score the cardboard where it needs to fold.

Assuming you’ve got a powerful enough laser, you can now turn all those Amazon Prime boxes into the perfect shippers to use when your mom finally makes you sell your collection of Yu-Gi-Oh! cards on eBay. Otherwise, you can just use them to build a wall so she’ll finally stay out of your side of the basement.

[Thanks to Adrian for the tip.]

Printing Strain Wave Gears

Tue, 06/19/2018 - 14:30

We just wrapped up the Robotics Module Challenge portion of the Hackaday Prize, and if there’s one thing robots need to do, it’s move. This usually means some sort of motor, but you’ll probably want a gear system on there as well. Gotta have that torque, you know.

For his Hackaday Prize entry, [Johannes] is building a 3D printed Strain Wave Gear. A strain wave gear has a flexible middle piece that touches an outer gear rack when pushed by an oval central rotor. The difference in the number of teeth on the flexible collar and the outer rack determines the gear ratio.
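
For a sense of the numbers, the usual textbook form of that relationship goes like this (the video doesn’t spell out [Johannes]’s exact tooth counts, so the example figures are ours):

```latex
\text{ratio} = \frac{N_f}{N_r - N_f}
```

where N_f is the tooth count of the flexible collar and N_r that of the outer rack. A 100-tooth flexible member inside a 102-tooth ring gives a 50:1 reduction, with the output turning opposite the input.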

This gear is almost entirely 3D printed, and the parts don’t need to be made of flexible filament or have weird support structures. It’s printed out of PETG, which [Johannes] says is slippery enough for a harmonic drive, and the NEMA 17 stepper is completely contained within the housing of the gear itself.

Printing a gear system is all well and good, but what do you do with it? As an experiment, [Johannes] slapped two of these motors together along with a strange, bone-like adapter to create a pan/tilt mount for a camera. Yes, if you don’t look at the weird pink and blue bone for a second, it’s just a DSLR on a tripod with a gimbal. The angular resolution of this setup is 0.03 degrees, so it should be possible to use this setup for astrophotography. Impressive, even if that particular implementation does look a little weird.

Federico Faggin: The Real Silicon Man

Tue, 06/19/2018 - 13:01

While doing research for our articles about inventing the integrated circuit, the calculator, and the microprocessor, one name kept popping up that was new to me: Federico Faggin. Yet this was a name I should have known just as well as those of his famous contemporaries Kilby, Noyce, and Moore.

Faggin seems to have been at the heart of many of the early advances in microprocessors. He played a big part in the development of MOS processors during the transition from TTL to CMOS. He was co-creator of the first commercially available processor, the 4004, as well as the 8080. And he was a co-founder of Zilog, which brought out the much-loved Z80 CPU. From there he moved on to neural networking chips, image sensors, and is active today in the scientific study of consciousness. It’s time then that we had a closer look at a man whose very core must surely be made of silicon.

Learning Electronics

Federico at Olivetti, middle-right. Photo: intel4004.com

Faggin was born in 1941 in Vicenza, Italy. From an early age, he formed an interest in technology, even attending a technical high school.

After graduating at age 19 in 1961, he got a short-term job at the Olivetti Electronics Laboratory. There he worked on a small experimental digital transistor computer with a 4096 word, 12-bit magnetic core memory and an approximately 1000 logic gate CPU. After his boss had a serious car accident, Faggin took over as project leader. The job was a great learning experience for his future career.

He next studied physics at the University of Padua where he graduated summa cum laude in 1965. He stayed on for a year teaching electronics to 3rd-year students.

Creating MOS Silicon Gate Technology (SGT) At Fairchild

In 1967 he started work at SGS-Fairchild, now STMicroelectronics, in Italy. There he developed their first MOS (metal-oxide-semiconductor) silicon gate technology (SGT) and their first two commercial MOS ICs. They then sent him to Silicon Valley in California to work at Fairchild Semiconductor in 1968.

During the 1960s, logic for ICs was largely done using TTL (Transistor-Transistor Logic). The two ‘T’s refer to using bipolar junction transistors for the logic followed by one or more transistors for the amplification. TTL was fast but took a lot of room, restricting how much could fit into an IC. TTL microprocessors also consumed a lot of power.

MOSFET, by CyrilB CC-BY-SA 3.0

On the other hand, ICs containing MOSFETs had manufacturing problems that led to inconsistent and variable speeds, as well as lower speeds than were theoretically possible. If those problems could be solved, then MOS would be a good substitute for TTL on ICs, since more could be crammed into a smaller space. MOSFETs also required far less power.

In the mid-1960s, to make an aluminum gate MOSFET, the source and drain regions would first be defined and doped, followed by the gate mask defining the thin-oxide region, and lastly the aluminum gate over the thin-oxide.

However, the gate mask would inevitably be misaligned in relation to the source and drain masks. The workaround for this misalignment was to make the thin-oxide region large enough to ensure that it overlapped both the source and drain. But this led to gate-to-source and gate-to-drain parasitic capacitance which was both large and variable and was the source of the speed problems.

Faggin and the rest of his team at Fairchild worked on these problems between 1966 and 1968. Part of the solution was to define the gate electrode first and then use that as a mask to define the source and drain regions, minimizing the parasitic capacitances. This was called the self-aligned gate method. However, the process for making self-aligned gates raised issues with using aluminum for the gate electrode. This was solved by switching to amorphous silicon instead. This self-aligned gate solution had been worked on before, but not to the point where ICs could be manufactured for commercial purposes.

Faggin and Tom Klein at Fairchild in 1967, Credit: Fairchild Camera & Instrument Corporation

In 1968, Faggin was put in charge of developing Fairchild’s self-aligned gate MOS process technology. He first worked on a precision etching solution for the amorphous silicon gate and then created the process architecture and steps for fabricating the ICs. He also invented buried contacts, a technique which further increased the density through the use of an additional layer making direct ohmic connections between the polysilicon gate and the junctions.

These techniques became the basis of Fairchild’s silicon gate technology (SGT), which was widely used by industry from then on.

Faggin went on to make the first silicon-gate IC, the Fairchild 3708. This was a replacement for the 3705, a metal-gate IC implementing an 8-bit analog multiplexor with decoding logic and one which they had trouble making due to strict requirements. During its development, he further refined the process by using phosphorus gettering to soak up impurities and by substituting the vacuum-evaporated amorphous silicon with polycrystalline silicon applied using vapor-phase deposition.

The resulting SGT meant more components could fit on the IC than with TTL and power requirements were lower. It also gave a three to five times speed improvement over the previous MOS technology.

Making The First Microprocessors At Intel

Intel C4004 by Thomas Nguyen CC BY-SA 4.0

Faggin left Fairchild to join the two-year-old Intel in 1970 in order to do the chip design for the MCS-4 (Micro Computer System) project. The goal of the MCS-4 was to produce four chips, initially for use in a calculator.

One of those chips, the 4004, became the first commercially available microprocessor. The SGT which he’d developed at Fairchild allowed him to fit everything onto a single chip. You can read all the details of the steps and missteps toward that invention in our article all about it. Suffice it to say that he succeeded, and by March 1971, all four chips were fully functional.

Faggin’s design methodology was then used for all the early Intel microprocessors. That included the 8-bit 8008 introduced in 1972 and the 4040, an improved version of the 4004 introduced in 1974, for which Faggin took a supervisory role.

Meanwhile, Faggin and Masatoshi Shima, who had also worked on the 4004, together developed the design for the 8080. It was released in 1974 and was the first high-performance 8-bit microprocessor.

Creating The Z80

In 1974, Faggin left Intel to co-found Zilog with Ralph Ungermann to focus on making microprocessors. There he co-designed the Z80 with Shima, who joined him from Intel. The Z80 was software compatible with the 8080 but was faster and had double the number of registers and instructions.

The Z80 went on to be one of the most popular CPUs for home computers up until the mid-1980s, typically running the CP/M OS. Some notable computers were the Heathkit H89, the Osborne 1, the Kaypro series, a number of TRS-80s, and some of the Timex/Sinclair computers. The Commodore 128 used one alongside the 8502 for CP/M compatibility and a number of computers could use it as an add-on. My own experience with it was through the Dy4.

This is a CPU which no doubt many Hackaday readers will have fond memories of and still build computers around to this day, one such example being this Z80 Raspberry Pi look-alike.

Both the Z80 and the Z8 microcontroller, also conceived of by Faggin, are still in production today.

The Serial Entrepreneur

After leaving Zilog, in 1984 Faggin created his second startup, Cygnet Technologies, Inc. There he conceived of the Communication CoSystem, a device which sat between a computer and a phone line and allowed transmission and receipt of both voice and data during the same session.

In 1986 he co-founded Synaptics along with Carver Mead and became CEO. Initially, they did R&D in artificial neural networks and in 1991, produced the I1000, the first single-chip optical character recognizer. In 1994 they introduced the touchpad, followed by early touchscreens.

Between 2003 and 2008, Faggin was president and CEO of Foveon where he redirected their business into image sensors.

At the Computer History Museum, by Dicklyon CC-BY-SA 4.0

Awards And Present Day

Faggin received many awards and prizes, including the Marconi Prize, the Kyoto Prize for Advanced Technology, being made a Fellow of the Computer History Museum, and the 2009 National Medal of Technology and Innovation, presented to him by President Barack Obama. In 1996 he was inducted into the National Inventors Hall of Fame for co-inventing the microprocessor.

In 2011 he and his wife founded the Federico and Elvia Faggin Foundation, a non-profit organization supporting the theoretical and experimental study of consciousness, an interest he gained from his time at Synaptics. His work with the Foundation is now his full-time activity.

He still lives in Silicon Valley, California, where he and his wife moved from Italy in 1968. A fitting home for the silicon man.

Lawn From Hell Saved by Mower From Heaven

Tue, 06/19/2018 - 11:30

It’s that time of year again, at least in the northern hemisphere. Everything is alive and growing, especially that narrow-leafed non-commodity that so many of us farm without tangible reward. [sonofdodie] has a particularly hard row to hoe—his backyard is one big, 30° slope of knee-ruining agony. After 30 years of trudging up and down the hill, his body was telling him to find a better way. But no lawn service would touch it, so he waited for divine inspiration.

And lo, the answer came to [sonofdodie] in a trio of string trimmers. These Whirling Dervishes of grass grazing are mounted on a wheeled plywood base so that their strings overlap slightly for full coverage. Now he can sit in the shade and sip lemonade as he mows via rope and extension cord using a mower that cost about $100 to build.

These heavenly trimmers have been modified to use heavy nylon line, which means they can whip two weeks’ worth of rain-fueled growth with no problem. You can watch the mower shimmy down what looks like the world’s greatest Slip ‘n Slide hill after the break.

Yeah, this video is two years old, but somehow we missed it back then. Ideas this fresh that tackle age-old problems are evergreen, unlike these plots of grass we must maintain. There’s more than one way to skin this ecological cat, and we’ve seen everything from solar mowers to robotic mowers to mowers tied up to wind themselves around a stake like an enthusiastic dog.

Thanks for the tip, [Itay]!

What is Our Martian Quarantine Protocol?

Tue, 06/19/2018 - 10:01

If you somehow haven’t read or watched War of the Worlds, here’s a spoiler alert. The Martians are brought down by the common cold. You can argue whether alien biology would be susceptible to human pathogens, but if it were, it wouldn’t be surprising if aliens had little defense against our bugs. The worrisome part of that is the reverse. Could an astronaut or a space probe bring back something that would ravage the Earth with some disease? This is not science fiction; it is both a historically serious question and one we’ll face in the near future. If we send people to Mars, are they going to come back with something harmful?

A Bit of News: Methane Gas Fluctuations on Mars

What got me thinking about this was the mounting evidence that there could be life on Mars. Not a little green man with a death ray, but perhaps microbe-like life forms. In a recent press release, NASA revealed that they not only found old organic material in rocks, but they also found that methane gas is present on Mars and the amount varies based on the season with more methane occurring in the summer months. There’s some dispute about possible inorganic reasons for this, but it is at least possible that the variation is due to increased biological activity during the summer.

These aren’t the first potential signs of life on Mars, either. In 1996, David McKay, Everett Gibson, and Kathie Thomas-Keprta from the Johnson Space Center announced they had found microbial fossils in a piece of meteorite that originated on Mars (see picture, below). The scientific community came up with a lot of alternative explanations, but to this day we don’t know conclusively if it is evidence of Martian life or just an inorganic process.

History Repeats

So far, there’s nothing really worrisome about Martian microbes because they are far away. But we have had contact with one other extraterrestrial body already: the moon. If you think the moon landing was fake, you’ve clearly overestimated the ability of the government to keep a secret. In 1969, two astronauts who had walked on the moon returned to Earth. Would they go down in history as modern-day Typhoid Marys?

There was a very low chance that the moon harbored any sort of dangerous microbes, but there was a chance. And the price to pay for being wrong could have been very high, so NASA erred on the side of caution. That’s how the Mobile Quarantine Facility (MQF) came into being.

Chillin’ in an RV

If you have ever seen an Airstream trailer, you’d recognize the MQF. It was a 35-foot trailer that had no wheels but did have an elaborate air filtering system. Once the Apollo capsule landed in the ocean, the recovery crew threw down isolation suits, which the astronauts wore until they were brought to the USS Hornet and installed in the MQF.

Although 35 feet doesn’t sound very big for three men, it was spacious compared to the lunar capsule they’d been in. Actually, there were five men inside — an engineer and a doctor were sealed in with the crew for the 65-hour observation period. Carried by airplane and truck, the MQF made its way to Pearl Harbor and then Houston. Once in Houston, the space travelers were moved into the Lunar Receiving Lab for two more weeks of isolation.

The same procedure was in place until Apollo 15. Of course, Apollo 13 didn’t reach the moon, so it didn’t use the MQF either. Looking back on it, it seems almost silly that there was so much concern.

Now There’s Mars

It might seem silly now, but back then the logic was sound. First and foremost, if there was any chance at all, you had to be sure. Being wrong would have been devastating — possibly even killing everyone on Earth. Second, this was a well-funded and highly-visible government project so there was some political need to look cautious and a lot of money available to do so.

Mars may be a different story. NASA isn’t funded like it used to be. Elon Musk’s company may get there first, or maybe another country will go. We are at a time when people aren’t as careful as they used to be, in a lot of areas. Will we take precautions against a Martian plague?

Even if you think native Martian life is not likely (or not likely to want to feast on humans and other Earth creatures), there’s another concern. In fact, it may be both more likely to happen and more likely to be deadly. It turns out that despite NASA’s unwillingness to go out on a limb and say there is life on Mars, we know that there is almost certainly some life on Mars. We know that because we brought it there ourselves.

Every spacecraft we send to the red planet — or anywhere — will have some amount of Earth life within it. The process even has a name: forward contamination. NASA has a planetary protection officer who ensures that the Committee on Space Research (COSPAR) standards are met. This international standard requires that all space-exploring nations limit the chance of contamination to 1 in 1,000 total. It even goes as far as to allocate that total to different nations; the US is allowed a 1 in 40,000 chance.

The initial Viking probes were baked almost sterile, but after discovering the harsh environment on the Martian surface, future probes were given more latitude. However, more recent research has shown that on Earth, microbes can live under some hellish conditions, so there is some likelihood we’ve already contaminated Mars.

Don’t think the microbes would survive the ride to Mars? Think again. Experiments on the International Space Station confirm that the cleaning done by NASA leaves only the hardiest strains of bacteria and it appears that at least some could hibernate until they found the right conditions for life. In fact, under current COSPAR rules, if NASA’s Curiosity probe finds water, it can’t get near it because it is not clean enough.

Double Threat

So there are really two threats. Native Martian microbes hitching a ride to Earth, or mutated Earth organisms catching a lift back to their home planet. Either way could have serious consequences. The same COSPAR group that dictates how many microbes you can take to Mars and other planets also specifies how to quarantine things coming back. The United States actually had the “Extra-Terrestrial Exposure Law” at the time of the moon landings, but that was removed in 1991. So it isn’t clear if a private entity like Musk would be required to follow any such procedure.

Of course, this is Hackaday, so what’s the hacker angle to all this? In my opinion, part of the problem here is defining life. What’s alive and what’s not? Like Justice Potter Stewart said about pornography, “I know it when I see it.” On Star Trek it was easy to “scan” a planet and announce you’d found life forms. But what does that mean exactly? There have been cases found of inorganic matter self-organizing. There are macromolecular systems that self-replicate. There is no assurance that alien life would be based on the carbon chemistry we associate with life.

A few decades of scientists haven’t figured that out yet. Maybe it’s time we took a crack at it. How can you detect life? For safety, how much life do you need to detect? Microbial life? Is it possible that inorganic life (e.g., silicon-based life) would not be harmful to people? Are we sure? Even just detecting Earth-like life — preferably from at least a short distance — would be a great benefit to science. If you want some reading on that topic, stop off at NASA’s astrobiology web site.

All the images in this post are from NASA, as you might expect. If you want to see more about the MQF, Airstream has an interesting video with a few internal details of the facility’s construction. You can see that video, below.

An Arduino Powered Tank Built To Pull Planes

Tue, 06/19/2018 - 07:00

Surely our readers are well aware of all the downsides of owning an airplane. Certainly the cost of fuel is a big one. Birds are a problem, probably. That bill from the traveling propeller sharpener is a killer too…right? Alright fine, we admit it, nobody here at Hackaday owns an airplane. But probably neither do most of you; so don’t look so smug, pal.

But if you did own a plane, or at least work at a small airport, you’d know that moving the things around on the ground is kind of a hassle. Smaller planes can be pulled by hand, but once they get up to a certain size you’ll want some kind of vehicle to help out. [Anthony DiPilato] wanted a way to move around a roughly 5,200 pound Cessna 310, and decided that all the commercial options were too expensive. So he built his own Arduino powered tank to muscle the airplane around the tarmac, and his journey from idea to finished product is absolutely fascinating to see.

So the idea here is pretty simple: a little metal cart equipped with two beefy motors, an Arduino Mega, a pair of motor controllers, and an HC-08 Bluetooth module so you can control it from your phone. How hard could it be, right? Well, it turns out combining all those raw components into a little machine that’s strong enough to tow a full-scale aircraft takes some trial and error.
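
As a rough sketch of how compact the core drive logic can be (the pin numbers, baud rate, and one-byte command scheme below are our assumptions, not [Anthony]’s actual firmware):

```cpp
// Differential drive over Bluetooth serial. Assumes the HC-08 is on
// the Mega's Serial1 and each motor controller takes a direction pin
// plus a PWM speed pin; all pin numbers are placeholders.
const int L_PWM = 4, L_DIR = 5, R_PWM = 6, R_DIR = 7;

void setup() {
  Serial1.begin(9600);              // HC-08 factory default baud
  const int pins[] = {L_PWM, L_DIR, R_PWM, R_DIR};
  for (int p : pins) pinMode(p, OUTPUT);
}

void drive(int left, int right) {   // -255..255 per track
  digitalWrite(L_DIR, left >= 0 ? HIGH : LOW);
  digitalWrite(R_DIR, right >= 0 ? HIGH : LOW);
  analogWrite(L_PWM, abs(left));
  analogWrite(R_PWM, abs(right));
}

void loop() {
  if (!Serial1.available()) return;
  switch (Serial1.read()) {         // one-byte commands from the phone
    case 'F': drive( 200,  200); break;  // forward
    case 'B': drive(-200, -200); break;  // reverse
    case 'L': drive(-150,  150); break;  // pivot left
    case 'R': drive( 150, -150); break;  // pivot right
    default:  drive(0, 0);               // anything else: stop
  }
}
```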

It took [Anthony] five iterations before he fine-tuned the design to the point where it was able to successfully drag the Cessna without buckling under the pressure. The early versions featured wheels, but eventually it was decided that a tracked vehicle would be required to get enough grip on the blacktop. Luckily for us, each failed design is shown along with a brief explanation of what went wrong. Admittedly it’s unlikely any of us will be recreating this particular project, but we always love to see when somebody goes through the trouble of explaining what went wrong. When you include that kind of information, somewhere, somehow, you’re saving another maker a bit of time and aggravation.

Hackers absolutely love machines with tank treads. From massive 3D printed designs to vaguely disturbing humanoid robots, there’s perhaps no sweeter form of locomotion in the hacker arsenal.

Tables are Turned as Robots Assemble IKEA Furniture

Tue, 06/19/2018 - 04:00

Hackaday pages are rife with examples of robots being built with furniture parts. In this example, the tables are turned and robots are the masters of IKEA pieces. We are not silly enough to assume that these robots unfolded the instructions, looked at one another, scratched their CPUs, and began assembling. Of course, the procedure was preordained by the programmers, but the way they mate the pegs into the ends of the cross-members is a very human thing to do. It reminds us of finding a phone charging socket in the dark. This kind of behavior is due to force feedback, which tells the robots when a piece is properly seated, meaning they can use vision to fit the components together without sub-millimeter precision.

All the hardware used to make the IKEA assembler is publicly available, and while it may be out of the typical hacker price range, this is a sign of the times as robots become part of the household. Currently, the household robots are washing machines, smart speakers, and 3D printers. Ten years ago those weren’t Internet-connected machines, so it should be no surprise if robotic arms join the club of household robots soon. Your next robotics project could be the tipping point that brings a new class of robots to the home.

Back to our usual hijinks, here is a robot arm from IKEA parts, a projector built into a similar lamp, or a 3D printer enclosed in an IKEA cabinet for a classy home robot.

Thank you, [Itay] for another great tip.

Hybrid Lab Power Supply From Broken Audio Amp

Tue, 06/19/2018 - 01:00

The lab power supply is an essential part of any respectable electronics workbench. However, the cost of buying a unit that has all the features required can be eye-wateringly high for such a seemingly simple device. [The Post Apocalyptic Inventor] has shown us how to build a quality bench power supply from the guts of an old audio amplifier.

We’ve covered our fair share of DIY power supplies here at Hackaday, and despite this one being a year old, it goes the extra mile for a number of reasons. Firstly, many of the expensive and key components are salvaged from a faulty audio amp: the transformer, large heatsink and chassis, as well as miscellaneous capacitors, pots, power resistors and relays. Secondly, this power supply is a hybrid. As well as two outputs from off-the-shelf buck and boost converters, there is also a linear supply. The efficiency of the switching supplies is great for general purpose work, but having a low-ripple linear output on tap for testing RF and audio projects is really handy.

The addition of the linear regulator is covered in a second video, and it’s impressively technically comprehensive. [TPAI] does a great job of explaining the function of all the parts which comprise his linear supply, and builds it up manually from discrete components. To monitor the voltage and current on the front panel, two vintage dial voltmeters are used, after one is converted to an ammeter. It’s these small auxiliary hacks which make this project stand out – another example is the rewiring of the transformer secondary and bridge rectifier to obtain a 38V rail rated for twice the original current.

The Chinese DC-DC switching converters at the heart of this build are pretty popular these days, in fact we’re even seeing open source firmware being developed for them. If you want to find out more about how they operate on a basic level, here’s how a buck converter works, and also the science behind boost converters.

The Colpitts Oscillator Explained

Mon, 06/18/2018 - 22:00

The Colpitts oscillator is a time-tested design — from 1918. [The Offset Volt] has a few videos covering the design of these circuits including an op-amp and a transistor version. You can find the videos below.

You can tell a Colpitts oscillator by the two capacitors in the feedback circuit. Together, C1 and C2 form an effective capacitance for the circuit equal to their product divided by their sum (the familiar series combination). The effective capacitance and the inductance form a bandpass filter that is very sharp at the frequency of interest, allowing the amplifier to build up oscillations at that frequency.
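
Written out, that’s just the series-capacitor formula feeding the standard LC resonance (textbook results, not something derived in the videos):

```latex
C_{\mathrm{eff}} = \frac{C_1 C_2}{C_1 + C_2},
\qquad
f_0 = \frac{1}{2\pi\sqrt{L\,C_{\mathrm{eff}}}}
```

With two 1 nF capacitors and a 10 µH inductor, for instance, the effective capacitance is 500 pF and the oscillator runs near 2.25 MHz.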

It is unusual to show an op-amp oscillator, and it is interesting to think about the design changes and limitations discussed in the video. The video isn’t just theoretical, either. He also builds the circuit and looks at the real-world performance.

It is also interesting to look at the difference between the op amp and the bipolar circuit. Of course, you can use other active devices like a FET, too. This is also an important circuit when a crystal is part of the feedback circuit instead of an inductor.

We see a lot of these as low-power ham radio transmitters. If you want to see a different oscillator design, we talked about Pierce oscillators before.

Raytheon’s Analog Read-Only Memory is Tube-Based

Mon, 06/18/2018 - 19:01

There are many ways of storing data in a computer’s memory, and not all of them allow the computer to write to it. For older equipment, this was often a physical limitation to the hardware itself. It’s easier and cheaper for some memory to be read-only, but if you go back really far you reach a time before even ROMs were widespread. One fascinating memory scheme is this example using a vacuum tube that stores the characters needed for a display.

[eric] over at TubeTime recently came across a Raytheon monoscope from days of yore and started figuring out how it works. The device is essentially a character display in an oscilloscope-like CRT package, but the way that it displays the characters is an interesting walk through history. The monoscope has two circuits: one selects the character, and the other determines the position on the screen. Each circuit is fed a delightfully analog sine wave, which allows the device to create a scanning pattern on the screen for refreshing the display.

[eric] goes into a lot of detail on how this c.1967 device works, and it’s interesting to see how engineers were able to get working memory with their relatively limited toolset. One of the nice things about working in the analog world, though, is that it’s relatively easy to figure out how things work and start using them for all kinds of other purposes, like old analog UHF TV tuners.

Arduino Watchdog Sniffs Out Hot 3D Printers

Mon, 06/18/2018 - 16:00

We know we’ve told you this already, but you should really keep a close eye on your 3D printer. The cheaper import machines are starting to display a worrying tendency to go up in flames, either due to cheap components or design flaws. The fact that it happens is, sadly, no longer up for debate. The best thing we can do now is figure out ways to mitigate the risk for all the printers that are already deployed in the field.

At the risk of making a generalization, most 3D printer fires seem to be due to overheating components. Not a huge surprise, of course, as parts of a 3D printer heat up to hundreds of degrees and must remain there for hours and hours on end. Accordingly, [Bin Sun] has created a very slick device that keeps a close eye on the printer’s temperature at various locations, and cuts power if anything goes out of acceptable range.

The device is powered by an Arduino Nano and uses a 1602 serial LCD and KY040 rotary encoder to provide the user interface. The user can set the shutdown temperature with the encoder knob, and the 16×2 character LCD will give a real-time display of current temperature and power status.

Once the user-defined temperature is met or exceeded, the device cuts power to the printer with an optocoupler relay. It will also sound an alarm for one minute so anyone in the area will know the printer needs some immediate attention.
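
The heart of such a watchdog fits in surprisingly few lines. Here is a simplified sketch under assumed wiring (a 100k NTC divider on A0, the relay on pin 7, a buzzer on pin 8) and illustrative thermistor constants; it is not necessarily [Bin Sun]’s pinout or math:

```cpp
#include <math.h>

const int THERM_PIN = A0, RELAY_PIN = 7, BUZZER_PIN = 8;
int shutdownC = 60;  // set via the rotary encoder in the real device

float readTempC() {
  // 100k NTC with a 100k series resistor, simplified B-equation
  float r  = 100000.0 / (1023.0 / analogRead(THERM_PIN) - 1.0);
  float tK = 1.0 / (1.0 / 298.15 + log(r / 100000.0) / 3950.0);
  return tK - 273.15;
}

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  pinMode(BUZZER_PIN, OUTPUT);
  digitalWrite(RELAY_PIN, HIGH);    // relay closed: printer powered
}

void loop() {
  if (readTempC() >= shutdownC) {
    digitalWrite(RELAY_PIN, LOW);   // cut power to the printer
    tone(BUZZER_PIN, 2000, 60000);  // sound the alarm for one minute
    while (true) {}                 // latch off until manually reset
  }
  delay(500);
}
```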

We’ve recently covered a similar device that minimizes the amount of time the printer is powered on, but checking temperature and acting on it in real-time seems a better bet. No matter what, we’d still suggest adding a smoke detector and fire extinguisher to your list of essential 3D printer accessories.

Buttery Smooth Fades with the Power of HSV

Mon, 06/18/2018 - 13:01

In firmware-land we usually refer to colors using RGB. This is intuitively pleasing with a little background on color theory and an understanding of how multicolor LEDs work. Most of the colorful LEDs we use are not actually a single diode. They are red, green, and blue diodes shoved together in tight quarters. (Though interestingly, very high-end LEDs use even more colors than that, but that’s a topic for another article.) When all three light up at once, the emitted light munges together into a single color which your brain perceives. Appropriately, the schematic symbol for an RGB LED without an onboard controller typically depicts three discrete LEDs all together. So it’s clear why representing an RGB LED in code as three individual values {R, G, B} makes sense. But by binding our representation of color in firmware to the physical system, we accidentally limit ourselves.

The inside of an RGB LED

Last time we talked about color spaces, we learned about different ways to represent color spatially. The key insight was that these models, called color spaces, could be used to represent the same colors using different groups of values, and in fact that the grouped values themselves could be used to describe multidimensional spatial coordinates. But that post was missing the punchline. “So what if you can represent colors in a cylinder!” I hear you cry. “Why do I care?” Well, it turns out that using colorspace can make some common firmware tasks easier. Follow on to learn how!

Our friend the HSV Cylinder, by [SharkD]

For the rest of this post we’re going to work in the HSV color space. HSV represents single colors as combinations of hue, saturation, and value. Hue is measured in degrees (0°-359°) of rotation and sets the color. Saturation sets the intensity of the color; removing saturation moves towards white, while adding it moves closer to the set hue. And value sets how much lightness there is; a value of 0 is black, whereas maximum value is the brightest and most intense the color can be. This is all a little difficult to describe textually, but take a look at the illustration to the left to see what I mean.

So back again to “why do I care?” Making the butteriest smooth constant brightness color fades is easy with HSV. Trivial. Want to know how to do it? Increment your hue. That’s it. Just increment the hue and the HSV -> RGB math will take care of the rest. If you want to fade to black, adjust your saturation. If you want to perceive true constant brightness or get better dynamic range from your LEDs, that’s another topic. But for creating a simple color fade all you need is HSV and a single variable.
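
Here’s what that looks like in practice: a small standalone HSV-to-RGB conversion plus the one-variable fade loop. This is generic C++ for illustration, not tied to any particular LED library:

```cpp
#include <cmath>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Standard HSV-to-RGB conversion: h in degrees (0-359), s and v 0.0-1.0.
RGB hsvToRgb(float h, float s, float v) {
  float c = v * s;  // chroma
  float x = c * (1 - std::fabs(std::fmod(h / 60.0f, 2.0f) - 1));
  float m = v - c;
  float r = 0, g = 0, b = 0;
  if      (h <  60) { r = c; g = x; }
  else if (h < 120) { r = x; g = c; }
  else if (h < 180) { g = c; b = x; }
  else if (h < 240) { g = x; b = c; }
  else if (h < 300) { r = x; b = c; }
  else              { r = c; b = x; }
  return { uint8_t((r + m) * 255), uint8_t((g + m) * 255),
           uint8_t((b + m) * 255) };
}

// The entire fade animation: one variable, bumped once per frame.
void stepRainbow(uint16_t& hue) {
  RGB c = hsvToRgb(hue, 1.0f, 1.0f);
  // write c to your LED strip here
  hue = (hue + 1) % 360;
}
```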

Avoid Strange Fades

A linear interpolation from green to pink

“But RGB color fades are easy!” you say. “All I need to do is fade R and G and B and it works out!” Well actually, they aren’t quite as simple as that makes them appear. The naive way to fade between RGB colors would be exactly what was described, a linear interpolation (a LERP). Take your start and end colors, calculate the difference in each channel, slice those differences into as many frames as you want your animation to last, and done. During each frame add or subtract the appropriate slice and your color changes. But let’s think back to the color cube. Often a simple LERP like this will work fine, but depending on the start and end points you can end up with pretty dismal colors in the middle of the fade. Check out this linear fade between bright green and hot pink. In the middle there is… gray. Gray!?

So what causes those strange colors to show up? Think back to the RGB cube. By adjusting red, green, and blue at once we’re traversing the space inside the cube between two points in space. In the case of the example green/pink fade the interpolation takes us directly through the center of the cube where grey lives. If every point inside the cube represents a unique mixture of red, green, and blue we’re going to get, well, every color. Some of that space has colors that you probably don’t want to show up on your 40 meter light strip. Somewhere in that cube is murky brown.

But this can be avoided! All you have to do is traverse the colorspace intelligently. In RGB that probably means adjusting channels one or two at a time and trying to avoid going through the mid-cube badlands. For the sample green to pink fade we can break it into two pieces; a fade from green to blue, then a fade from blue to pink. Check out the split LERP on the right to see how it looks. Not too bad, right? At least there is no grey anymore. But that was a pretty complex way to get a boring fade to work. Fortunately we already know about the better way to do it.

A LERP in HSV

How does this fade look in HSV? Well there’s only one channel to interpolate – hue. If we convert the two sample RGB values into HSV we get bright green at {120°, 100%, 100%} for the start and pink at {300°, 100%, 100%} for the end. Do we add or subtract to go between them? It doesn’t actually matter, though often you may want to interpolate as quickly as possible (in which case you want to traverse the shortest distance). It’s worth noting that 0° and 359° are adjacent, so it’s safe to overflow or underflow the degree counter to travel the shortest absolute distance. In the case of green/pink it is equally fast to count up from 120° to 300° as it is to count down from 120° to 300° (passing through 0°). Assuming we count upwards it looks like the figure on the left. Nice, right? Those bland grays have been replaced by a perky shade of blue.
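
If you want a fade between two set colors rather than an endless rainbow, the only subtlety is folding the 0°/359° wraparound into the hue delta. A little helper like this (again generic C++, moving one degree per frame) always takes the shortest way around the circle:

```cpp
// Step `current` one degree toward `target` along the shortest arc.
// Adding 540 before the modulo folds the signed difference into the
// range -180..179, which handles the wraparound for free.
int hueStepToward(int current, int target) {
  int diff = (target - current + 540) % 360 - 180;
  if (diff == 0) return current;                 // already there
  return (current + (diff > 0 ? 1 : -1) + 360) % 360;
}
```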

There are a couple other nice side effects of using HSV like this. One is that, as long as you don’t care about changing brightness, some animations can be very memory efficient. You only need one byte per pixel! Though that does prevent you from showing black and white, so you’d need an extra byte or two for those (not every colorspace is perfect). Changing a single parameter also makes it easy to experiment with non-linear easing to adjust how you approach a color setpoint, which can lead to some nice effects.

If you want to experiment with HSV, here are a couple files I’ve used in the past. No guarantees about efficiency or accuracy, but I’ve built hundreds of devices that used them and things seem to work ok.

There’s one more addendum here, and that’s that color is nothing if not an extremely complex topic. This post is just the barest poke into one corner of color theory and does not address a range of concerns about gamma/CIE correction, apparent brightness of individual colors, and more. This was what I needed to improve my RGB blinkenlights, not invent a new Pantone. If accurate color is an interesting topic to you, dig in and tell us what you learn!

Watch The World Spin With The Earth Clock

Mon, 06/18/2018 - 11:31

With the June solstice right around the corner, it’s a perfect time to witness firsthand the effects of Earth’s axial tilt on the day’s length above and beyond 60 degrees latitude. But if you can’t make it there, or otherwise prefer a more regular, less deprived sleep pattern, you can always resort to simulations to demonstrate the phenomenon. [SimonRob], for example, built a clock with a real-time rotating model of Earth to visualize its exposure to the sun over the year.

The daily rotation cycle, as well as Earth’s yearly trip around the sun, is simulated with a hand-painted plastic ball attached to a rotating axis and mounted on a rotating plate. The hand painting was done with a neat trick: placing printed slivers of an atlas inside the transparent orb to serve as guides. Movement for both axes is driven by a pair of stepper motors, and a ring of LEDs of the same diameter as the Earth model is used to represent the Sun. You can of course wait a whole year to observe it all in real time, or make use of a set of buttons that lets you fast forward and reverse time.

Earth’s rotation, and especially countering it, is a regular concept in astrophotography, so it’s a nice change of perspective to use it to look onto Earth itself from the outside. And who knows, if [SimonRob] ever feels like extending his clock with an aurora borealis simulation, he might find inspiration in this northern lights tracking light show.

This is a spectacular showpiece and a great project you can do with common tools already in your workshop. Once you’ve mastered Earth, put on your machinist’s hat and give the solar system a try.

Fatalities vs False Positives: The Lessons from the Tesla and Uber Crashes

Mon, 06/18/2018 - 10:01

In one bad week in March, two people were indirectly killed by automated driving systems. A Tesla vehicle drove into a barrier, killing its driver, and an Uber vehicle hit and killed a pedestrian crossing the street. The National Transportation Safety Board’s preliminary reports on both accidents came out recently, and these bring us as close as we’re going to get to a definitive view of what actually happened. What can we learn from these two crashes?

There is one outstanding factor that makes these two crashes look different on the surface: Tesla’s algorithm misidentified a lane split and actively accelerated into the barrier, while the Uber system eventually correctly identified the cyclist crossing the street and probably had time to stop, but it was disabled. You might say that if the Tesla driver died from trusting the system too much, the Uber fatality arose from trusting the system too little.

But you’d be wrong. The forward-facing radar in the Tesla should have prevented the accident by seeing the barrier and slamming on the brakes, but the Tesla algorithm places more weight on the cameras than the radar. Why? For exactly the same reason that the Uber emergency-braking system was turned off: there are “too many” false positives and the result is that far too often the cars brake needlessly under normal driving circumstances.

The crux of self-driving at the moment is precisely figuring out when to slam on the brakes and when not to. Brake too often, and the passengers are annoyed or the car gets rear-ended. Brake too infrequently, and the consequences can be worse. Indeed, this is the central problem of autonomous vehicle safety, and neither Tesla nor Uber has it figured out yet.

Tesla Cars Drive Into Stopped Objects

Let’s start with the Tesla crash. Just before the crash, the car was following behind another using its traffic-aware cruise control which attempts to hold a given speed subject to leaving appropriate following distance to the car ahead of it. As the Tesla approached an exit ramp, the car ahead kept right and the Tesla moved left, got confused by the lane markings on the lane split, and accelerated into its programmed speed of 75 mph (120 km/h) without noticing the barrier in front of it. Put simply, the algorithm got things wrong and drove into a lane divider at full speed.

To be entirely fair, the car’s confusion is understandable. After the incident, naturally, many Silicon Valley Tesla drivers recreated the “experiment” in their own cars and posted videos on YouTube. In this one, you can see that the right stripe of the lane-split is significantly harder to see than the left stripe. This explains why the car thought it was in the lane when it was actually in the “gore” — the triangular keep-out-zone just before an off-ramp. (From that same video, you can also see how any human driver would instinctively follow the car ahead and not be pulled off track by some missing paint.)

More worryingly, a similar off-ramp in Chicago fools a Tesla into the exact same behavior (YouTube, again). When you place your faith in computer vision, you’re implicitly betting your life on the quality of the stripes drawn on the road.

As I suggested above, the tough question in the Tesla accident is why its radar didn’t override and brake in time when it saw the concrete barrier. Hints of this are to be found in the January 2018 case of a Tesla rear-ending a stopped firetruck at 65 mph (105 km/h) (!), a Tesla hitting a parked police car, or even the first Tesla fatality in 2016, when the “Autopilot” drove straight into the side of a semitrailer. The telling quote from the owner’s manual: “Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h)…” Indeed.

Tesla’s algorithm likely doesn’t trust the radar because the radar data is full of false positives. There are myriad non-moving objects on the highway: street signs, parked cars, and cement lane dividers. The angular resolution of simple radars is low, and this means that at speed, the radar also “sees” the stationary objects that the car is not going to hit anyway. Because of this, to prevent the car from slamming on the brakes at every streetside billboard, Tesla’s system places more weight on the visual information at speed. Because Tesla’s “Autopilot” is not intended to be a self-driving solution, they can hide behind the fig leaf that the driver should have seen it coming.

Uber Disabled Emergency Braking

In contrast to the Tesla accident, where the human driver could have saved himself from the car’s blindness, the Uber car probably could have braked in time to prevent the accident entirely where a human driver couldn’t have. The LIDAR system picked up the pedestrian as early as six seconds before impact, variously classifying her as an “unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path.” Even so, when the car was 1.3 seconds and 25 m away from impact, the Uber system was sure enough that it concluded that emergency braking maneuvers were needed. Unfortunately, they weren’t engaged.

Braking from 43 miles per hour (69 km/h) in 25 m (82 ft) is just doable once you’ve slammed on the brakes on a dry road with good tires. Once you add in average human reaction times to the equation, however, there’s no way a person could have pulled it off. Indeed, the NTSB report mentions that the driver swerved less than a second before impact, and she hit the brakes less than a second after. She may have been distracted by the system’s own logging and reporting interface, which she is required to use as a test driver, but her reactions were entirely human: just a little bit too late. (If she had perceived the pedestrian and anticipated that she was walking onto the street earlier than the LIDAR did, the accident could also have been avoided.)
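
A quick back-of-the-envelope check bears out that “just doable” (ordinary constant-deceleration kinematics, using the report’s figures):

```latex
a = \frac{v^2}{2d}
  = \frac{(19.2\ \mathrm{m/s})^2}{2 \times 25\ \mathrm{m}}
  \approx 7.4\ \mathrm{m/s^2} \approx 0.75\,g
```

Good tires on dry pavement manage roughly 0.8 to 0.9 g of braking, so the car had the grip, but only if the brakes went on the instant the system called for them.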

But the Uber emergency braking system was not enabled “to reduce the potential for erratic vehicle behavior”. Which is to say that Uber’s LIDAR system, like Tesla’s radar, obviously also suffers from false positives.

Fatalities vs False Positives

In any statistical system, like the classification algorithm running inside self-driving cars, you run the risk of making two distinct types of mistake: detecting a bike and braking when there is none, and failing to detect a bike when one is there. Imagine you’re tuning one of these algorithms to drive on the street. If you set the threshold low for declaring that an object is a bike, you’ll make many of the first type of errors — false positives — and you’ll brake needlessly often. If you make the threshold for “bikiness” higher to reduce the number of false positives, you necessarily increase the risk of missing some actual bikes and make more of the false negative errors, potentially hitting more cyclists or cement barriers.
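
The seesaw is easy to demonstrate in a toy model. The classifier scores below are invented purely for illustration (real perception stacks are vastly more complicated), but the monotone tradeoff between the two error types is exactly the same:

```cpp
#include <cstdio>

int main() {
  // Made-up "bikiness" scores from a hypothetical classifier
  float bikes[]    = {0.55f, 0.70f, 0.80f, 0.90f, 0.95f};  // real bikes
  float notBikes[] = {0.05f, 0.20f, 0.40f, 0.60f, 0.75f};  // clutter
  for (float thresh = 0.1f; thresh < 1.0f; thresh += 0.2f) {
    int fn = 0, fp = 0;
    for (float s : bikes)    if (s <  thresh) ++fn;  // missed bike
    for (float s : notBikes) if (s >= thresh) ++fp;  // phantom brake
    std::printf("threshold %.1f: %d false negatives, %d false positives\n",
                thresh, fn, fp);
  }
}
```

Raising the threshold never improves both columns at once; all a designer can do is pick a point on the curve.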

Source: S. Engleman, in NTSB report

It may seem cold to couch such life-and-death decisions in terms of pure statistics, but the fact is that there is an unavoidable design tradeoff between false positives and false negatives. The designers of self-driving car systems are faced with this tough choice — weighing the everyday driving experience against the incredibly infrequent but much more horrific outcomes of the false negatives. Tesla, when faced with a high false positive rate from the radar, opts to rely more on the computer vision system. Uber, whose LIDAR system apparently generates too-frequent emergency braking maneuvers, turns the system off and puts the load directly on the driver.

And of course, there are hazards posed by an overly high rate of false positives as well: if a car ahead of you inexplicably emergency brakes, you might hit it. The effect of frequent braking is not simply driver inconvenience, but could be an additional cause of accidents. (Indeed, Waymo and GM’s Cruise autonomous vehicles get hit by human drivers more often than average, but that’s another story.) And as self-drivers get better at classification, both of these error rates can decrease, potentially making the tradeoff easier in the future, but there will always be a tradeoff and no error rate will ever be zero.

Without access to the numbers, we can’t really even begin to judge if Tesla’s or Uber’s approaches to the tradeoff are appropriate. Especially because the consequences of false negatives can be fatal and involve people other than the driver, this tradeoff affects everyone and should probably be discussed more transparently. If a company plays fast and loose with the false negative rate, drivers and pedestrians will die needlessly, but if it is too strict the car will be undriveable and erratic. Both Tesla and Uber, when faced with this difficult tradeoff, punted: they require a person to watch out for the false negatives, taking the burden off of the machine.

Opening A Ford With A Robot and the De Bruijn Sequence

Mon, 06/18/2018 - 07:01

The Ford Securicode, or the keyless-entry keypad available on all models of Ford cars and trucks, first appeared on the 1980 Thunderbird. Even though it’s most commonly seen on the higher-end models, it is available as an option on the Fiesta S — the cheapest car Ford sells in the US — for $95. Doug DeMuro loves it. It’s also a lock, and that means it’s ready to be exploited. Surely, someone can build a robot to crack this lock. Turns out, it’s pretty easy.

The electronics and mechanical parts of this build are pretty simple. An acrylic frame holds five solenoids over the keypad, and this acrylic frame attaches to the car with magnets. There’s a second large protoboard attached to this acrylic frame, loaded up with an Arduino, a character display, and a ULN2003 to drive the solenoids. So far, everything you would expect for a ‘robot’ that will unlock a car via its keypad.

The real trick for this build is making this electronic lockpick fast and easy to use. This project was inspired by [Samy Kamkar]’s OpenSesame attack for garage door openers. In this project, [Samy] didn’t brute force a code the hard way by sending one code after another; (crappy) garage door openers only look at the last n digits sent from the remote, and there’s no penalty for sending the wrong code. In this case, it’s possible to use a De Bruijn sequence to vastly reduce the time it takes to brute force every code. Instead of testing tens of thousands of different codes sequentially, this robot only needs to test 3125, something that should only take a few minutes.
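
Generating the sequence itself takes only a few lines with the standard recursive construction. Here’s a sketch in plain C++ (not the project’s actual code); note the final wraparound append, which mirrors the treat-it-as-circular fix mentioned below:

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Standard recursive De Bruijn construction for B(k, n): every
// length-n string over k symbols appears exactly once in the
// circular output.
std::string deBruijn(int k, int n) {
  std::vector<int> a(k * n, 0);
  std::string seq;
  std::function<void(int, int)> db = [&](int t, int p) {
    if (t > n) {
      if (n % p == 0)
        for (int j = 1; j <= p; ++j) seq += char('0' + a[j]);
    } else {
      a[t] = a[t - p];
      db(t + 1, p);
      for (int j = a[t - p] + 1; j < k; ++j) {
        a[t] = j;
        db(t + 1, t);
      }
    }
  };
  db(1, 1);
  return seq;
}

int main() {
  // Five effective keys (the Ford pad pairs its ten digits) and
  // five-digit codes: 5^5 = 3125 possibilities.
  std::string s = deBruijn(5, 5);
  s += s.substr(0, 4);  // unroll the circle for a linear key sequence
  std::cout << s.size() << " presses cover all 3125 codes\n";
}
```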

Right now the creator of this project is putting the finishing touches on this Ford-cracking robot. There was a slight bug in the code that was solved by treating the De Bruijn sequence as circular, but now it’s only a matter of time before a 1993 Ford Taurus wagon becomes even more worthless.

Dissecting the Elusive Wax Motor

Mon, 06/18/2018 - 04:01

We’d wager most readers aren’t intimately acquainted with wax motors. In fact, a good deal of you have probably never heard of them, let alone used one in a project. Which isn’t exactly surprising, as they’re very niche and rarely used outside of HVAC systems and some appliances. But they’re fascinating devices, and once you’ve seen how they work, you might just figure out an application for one.

[AvE] recently did a complete teardown on a typical wax motor, going as far as cutting the thing in half to show the inner workings. Now we’ve seen some readers commenting that everyone’s favorite foul-mouthed destroyer of consumer goods has lost his edge, that his newer videos are more about goofing off than anything. Well we can’t necessarily defend his signature linguistic repertoire, but we can confidently say this video does an excellent job of explaining these little-known gadgets.

The short version is that a wax motor, which is really a linear actuator, operates on the principle that wax expands when it melts. If a solid block of wax is placed in a cylinder, it can push on a piston during the phase change from solid to liquid. As the liquid wax resists compression, the wax motor has an exceptionally high output force for such a small device. The downside is, the stroke length is usually rather short: for the one [AvE] demonstrates, it’s on the order of 2 mm.

By turning heat directly into mechanical energy, wax motors are often used to open valves and vents when they’ve reached a specific temperature. The common automotive engine thermostat is a classic example of a wax motor, and they’re commonly found inside of dishwashers as a way to open the soap dispenser at the proper time during the cycle.

This actually isn’t the first time we’ve featured an in-depth look at wax motors, but [AvE] actually cutting this one in half combined with the fact that the video doesn’t look like it was filmed on a 1980’s camera makes it worth revisiting the subject.

Searchable KiCad Component Database Makes Finding Parts A Breeze

Mon, 06/18/2018 - 01:00

KiCad, the open source EDA software, is popular with Hackaday readers and the hardware community as a whole. But it is not immune from the most common bane of EDA tools. Managing your library of symbols and footprints, and finding new ones for components you’re using in your latest design, is rarely a pleasant experience. Swooping in to help alleviate your pain, [twitchyliquid64] has created KiCad Database (KCDB), a beautifully simple web-app for searching component footprints.

The database lets you easily search by footprint name, with optional parameters like number of pins. Of course it can also search by tag for a bit of flexibility (searching Neopixel returned the footprint shown above). There’s also an indicator for KiCad-official parts, which is a nice touch. One of our favourite features is the part viewer, which renders the footprint in your browser, making it easy to instantly see if the part is suitable. AngularJS and material design are at work here, and the main app is written in Go — very trendy.

The database is kindly publicly hosted by [twitchyliquid64] but can easily be run locally on your machine where you can add your own libraries. It takes only one command to add a GitHub repo as a component source, which then gets regularly “ingested”. It’s great how easy it is to add a neat library of footprints you found once, then forget about them, safe in the knowledge that they can easily be found in future in the same place as everything else.

If you can’t find the schematic symbols for the part you’re using, we recently covered a service which uses OCR and computer vision to automatically generate symbols from a datasheet; pretty cool stuff.
