I think I have built this project twice, destroyed about $40 worth of NeoPixels in the process, and fallen madly in love with a laser cutter, enough to consider changing my Facebook status to “it’s complicated”. Somewhere during all of this, the sweet, sweet sound of leveling up played several times as my soldering skills got better and better. I also learned, or maybe remembered, how satisfying making something can be.
TweetHeart is an internet-connected, sensitive, heart-shaped lightbox powered by a SparkCore microcontroller that reacts to particular events from the Twitter Streaming API, including retweets, unfollows, and favorites. I wanted to create something that would give a non-traditional notification – perhaps something to show me that I was “getting a little bit of love” even when I was disconnected. Here is a video of it in action:
If you want to check out the code base first, you can find the repository on GitHub here.
In Love With a Laser Cutter
I started out by creating the box or case for the TweetHeart. Thus begins the story of my romance with the laser cutter – a piece of machinery that I am fortunate to have occasional access to. I knew nothing about laser cutting before starting this project. I had to take some laser training where I learned all about lasers, classifications, dangers, and of course, just how full of awesome they are.
I wanted to make a box that I could easily put together or take apart, with an area cut out where I’d place the heart. I started out using a great online tool for making boxes, BoxMaker. BoxMaker gives you a PDF file to download that is perfectly set up for a laser cutter, which I then opened in Adobe Illustrator and modified to include the heart cutout. A laser cutter can make two types of cuts – vector or raster. A vector cut means you are cutting all the way through the material, so you are cutting something out. A raster cut is more for engraving the material. When you set up a file for a laser cutter, it knows what you want to vector cut by the weight of your line stroke, which needs to be 0.01pt. This may not be the same for all laser cutters – I was using an Epilog. All lines with a stroke of that weight will be vector cut, and anything else will be treated as a raster cut.
Once you have your PDF set up, you really just need to send your document to “print” on the laser cutter. This means adjusting the settings based on the material you are using. I’ve tried plywood, cardboard, paper, acrylic, and slate. Each material requires different settings for power, speed, etc., which also depend on the thickness of the material. Most laser cutters seem to have suggested settings for common materials.
I cut out the box from some plywood and then I vector cut that same pixel heart from frosted acrylic to insert into the box. I glued the box together except for the back piece which I wanted to keep loose so that I could get into the box when I had to replace a power source.
In hindsight, I would have cut a back panel that attached and detached with magnets.
Magnets became my friend during this building process. I’m not sure why I thought magnets would mess with my electronics, but they don’t. Magnets are awesome.
Let there be light
I bought a strip of NeoPixels from Adafruit. Now, I have to be honest: I totally ignored the entire Uberguide for NeoPixels – I was just too excited. Apparently you need to be careful when you power these up; the initial surge of power, if you don’t use a capacitor, can damage some of the pixels. I got the ones encased in a waterproof sleeve, so I went to work removing that sleeve and then cutting out the rows needed to make a heart. Once I had the rows cut out, I glued them down on an extra plywood heart I had cut, and then went to work wiring them.
NeoPixels are really easy to chain together. You just need to make sure the arrows on the strip are pointing the correct way, and then you connect GROUND to GROUND, 5V to 5V, and the data output (DOUT) of one segment to the data input (DIN) of the next.
I tested the connections as I added each row – approaching this like I would software, testing anything new before going all in. When the row lit up, I’d add the next row. To solder the connections, I’d tin the NeoPixel pads, put a bit of solder on the tip of the iron, lay down the wire, and then just press down. I can’t emphasize enough how much of a difference a good soldering setup can make. If you plan to spend a lot of time exploring electronics, do yourself a favour and get yourself set up with a good soldering station. I treated myself to this one and can’t recommend it enough. Once I knew the connections were good, I went over the wires with hot glue to keep them in place. Hot glue – as good as duct tape in most cases.
Bring on the Controller
I then plugged the NeoPixels into a SparkCore, with ground going to ground, the power line going to the 3V3 pin, and the data input going to the D2 pin. I chose a SparkCore not only because it was wifi-enabled, but also because it was small and, most importantly, supported NeoPixels. Not all microcontrollers support NeoPixels.
In Arduino, there are a few example NeoPixel sketches that let you test your chained strand just by setting the number of NeoPixels you are dealing with. This is also available to some degree in Spark. To work with NeoPixels, you either fork the NeoPixel example from the Spark libraries, or create a new application and add the NeoPixel library to your sketch. Once I put in the number of pixels I had – which was 93 – I flashed the SparkCore and soon my pixels were animating in all colors.
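The setup on the Spark side is only a few lines. Here is a minimal firmware sketch of what that looks like – it runs on the device, not on a desktop, and it assumes the community NeoPixel library plus my own choices of pin and pixel type, so treat it as a sketch rather than the exact TweetHeart code:

```cpp
// Firmware sketch (runs on the SparkCore, not on a desktop).
// Assumes the Spark NeoPixel library has been added to the application.
#include "neopixel/neopixel.h"

#define PIXEL_PIN   D2      // data line from the strip
#define PIXEL_COUNT 93      // total pixels in the chained heart
#define PIXEL_TYPE  WS2812B // common chip type on Adafruit strips (assumption)

Adafruit_NeoPixel strip = Adafruit_NeoPixel(PIXEL_COUNT, PIXEL_PIN, PIXEL_TYPE);

void setup() {
    strip.begin();
    strip.show(); // push an all-off frame so the strip starts dark
}

void loop() {
    // Light every pixel red as a quick wiring test.
    for (int i = 0; i < strip.numPixels(); i++) {
        strip.setPixelColor(i, strip.Color(255, 0, 0));
    }
    strip.show();
}
```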
And then there was Delay
Those NeoPixel example sketches are amazing and give you a great head start on programming the pixels. But I quickly realized I would have to reprogram how they animated, because the example sketches rely on the delay() function, which causes the program to stop processing for the specified amount of time. This is super handy – well, until you need to capture input at any time. And I did. I had to be able to call a function on the device at any time.
I had to avoid using delay() for the animations, which meant I had to write code that was more “timer” oriented. Simply put, I had to depend on elapsed time rather than delays, an approach I have outlined in more detail in this blog post. When I was done programming a few animations, I was ready to tie in external input via Twitter.
Now, with the hardware for the most part done, I could work on capturing stuff from Twitter. I was going to write a Node.js script using two modules: one for Twitter and another for the Spark. I used the Twitter Streaming API to capture events that I thought might be good to visualize – when someone favorited a tweet of mine, retweeted one of my tweets, or even when someone unfollowed me.
To work with the Twitter API, I chose the Twit module, as it had all the functionality I needed – and heh, I am lazy and really didn’t feel like comparing the array of Twitter-based Node modules.
The library is very straightforward and requires you to register an application with Twitter and give it the typical authorization token and keys. Once you do that, you are up and running in no time. If you have never done it before, there are a bunch of great tutorials out there, including this one.
Every time I captured an event from the Twitter Streaming API, I needed to notify my Spark device. A few libraries exist to work with the Spark; for this I used the official Spark Cloud Library. There might be a few ways to use this library or authenticate, but I still find it a bit weird that you pass in your username and password to log in, rather than the authentication tokens most other APIs use. The other piece of information you will need is the device ID, so that you know which Spark device to call in case you have multiple devices registered. I put all that sensitive data in a .env file and took the DotEnv library out for a spin.
Spark Controlling Input and Animations
Now, with the Node.js script able to call a function on the Spark device, the device had to handle the animations. I created a simple function called “setMode”, and based on the number passed in, it would trigger a different animation by setting a “mode” variable. That variable is checked on every pass through the loop, and a new animation starts if the mode has changed. So if someone favorited a tweet, I might run a little animation that looks like a heartbeat, and if someone unfollowed, the heart might drain out to black repeatedly.
I’m stoked with the finished project. It looks great in the dark, and even though I am only pulling in Twitter data, you could tie this into anything – it could light up when someone sends me a text message, for example (perhaps that is next…).
Check out all the code in the repository up on GitHub.