Microservices of the Library of Babel

“Hello and welcome to the Library of Babel, proudly running on Babel Web Services since 2009! I’m Pierre Menard, one of the staff librarians here (I also wrote Don Quixote, but that’s another story…), and I want to demonstrate our latest microservice!

The idea is simple. See that stack of books over there?

Pick out any book you like.

Great! Good choice. You couldn’t go on, but you went on. Now, open to a passage you like that has an accompanying RFID tag.

Awesome! Now, see that funky-looking gadget over there? The black box with a screen and a barcode surface on it, labeled “BWS” on the side?

If you open the box up you’ll get sucked into it and awoken from your alternate-reality dream like in Mulholland Drive. Go ahead and scan your passage on it and see what happens!

Look at that! You made Samuel Beckett roll over in his grave. Try another!

Capital!

Some books have tags on their covers, because we think that titles are also fair game!

See, even one swapped-out letter can cause a lot of mischief!
Thanks for the visit! Just a heads-up that the exit is a little hard to find here; some might even call it labyrinthine.

MQTT Arduino Pot to Email via IFTTTTTTTTTTTTT

You can see in the video below how the app works:

  • It starts out with a standard soft-touch pot circuit (my fave)
  • It then walks through the code, which takes the output of the switch and only posts it if the value reaches 255 (a rough sketch of that publish step is just after this list)
  • If/when it does, the value is sent to my “email” Adafruit IO feed (data collector)
  • From that feed, the value goes to IFTTT, which sends an email whenever the feed gets any new data, along with what that data is (255 in my case). (I had to refresh it manually b/c of rate limits)
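For reference, here’s a minimal sketch of that publish step in Python using the Adafruit_IO client library, assuming the pot value has already been read off the Arduino somehow (e.g. over serial). The credentials and the “email” feed name below are placeholders, not my actual setup.

```python
# Minimal sketch of the "only publish when the pot maxes out" logic,
# written host-side in Python with the Adafruit IO client library.
# The username, key, and feed name are placeholders.
from Adafruit_IO import Client

ADAFRUIT_IO_USERNAME = "your_username"
ADAFRUIT_IO_KEY = "your_aio_key"

aio = Client(ADAFRUIT_IO_USERNAME, ADAFRUIT_IO_KEY)

def maybe_publish(pot_value):
    """Send the reading to the 'email' feed only when it hits 255."""
    if pot_value == 255:
        # IFTTT watches this feed and fires the email applet
        # whenever new data shows up.
        aio.send_data("email", pot_value)

maybe_publish(255)  # publishes -> IFTTT sends the email
maybe_publish(3)    # ignored (the "no touch" reading)
```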

Video:

 

Height Verification, But For Real

Tinder’s April Fool’s joke this year was a “height verification” feature that would auto-magically determine a user’s height and prevent them from lying about it on their profile.

Their mockup was light on plausibility – the phone just happened to know your height no matter where and how you held it, without even an attempt to seem legit, like having the user take a photo near a landmark or something – but I wonder whether, in combination with a smartphone and a networked device, it could actually work.

Maybe the user has to wear a jacket with an RFID chip in the shoulder and hold their arm out fully extended with their phone in hand, and the distance between the shoulder and the phone gets calculated somehow. Maybe a sensor is set at a certain height in a Tinder pop-up (competing dating app Bumble did something similar, sans networked devices 🙁 ) where users walk by and are auto-magically measured and associated with their account.
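Just to make that second idea concrete, here’s a toy sketch of how a sensor mounted at a known height could turn one distance reading into a height estimate; every number and function name in it is hypothetical:

```python
# Toy sketch of the "sensor at a known height" idea.
# SENSOR_MOUNT_HEIGHT_CM and read_distance_to_head_cm() are hypothetical --
# in practice the reading would come from something like an ultrasonic or
# time-of-flight sensor pointed straight down at the person walking by.
SENSOR_MOUNT_HEIGHT_CM = 250.0  # how high the sensor is mounted

def read_distance_to_head_cm() -> float:
    # Placeholder for an actual sensor read.
    return 49.0

def estimate_height_cm() -> float:
    """Height = mount height minus the gap between sensor and head."""
    return SENSOR_MOUNT_HEIGHT_CM - read_distance_to_head_cm()

def cm_to_feet_inches(cm: float) -> str:
    total_inches = cm / 2.54
    feet, inches = divmod(round(total_inches), 12)
    return f"{feet}'{inches}\""

print(cm_to_feet_inches(estimate_height_cm()))  # ~6'7" for a 201 cm reading
```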

A couple of my guy friends are 6’6″, so I’ll get them to be my test subjects if I do try something like this for the final project.

NYC Network Infrastructure??

1.) LaGuardia Southwest Check-In

First, the LaGuardia renovation is all that you hoped for and more – no more “bodega with planes” vibe. But onto devices!

Security at airports bears the brunt of travel-tech criticism, for good reason – the TSA is horrifically inefficient and ineffective, and just the term “millimeter wave scanner” alone sounds like it was the brainchild of some TV-Trope mad scientist trying to come up with the most physics-sounding jumble of words possible.

But let’s talk about the second-biggest source of travel ire: boarding.

First, you have the actual herding-cattle exercise that is lining up to board, which, depending on your airline, can be meh or hell. Southwest’s is the least illogical I’ve seen so far: it groups you into A’s and B’s and C’s and then has you arrange yourselves according to your boarding number in neat segments of five.

Overall, the boarding process was probably the result of some multi-million-dollar consultancy, with a lot of random bits of psychology thrown in there for good measure.

But it’s all for naught if the lynchpin network device fails. The humble barcode scanner:

Again, as with the boarding processes I discussed above, shit works. I don’t want to undersell what was and is a Herculean effort on the part of the Southwest engineers who have to maintain the airline’s legacy systems, likely written in some godawful legacy language like FORTRAN back in the ’80s.

But that’s table stakes, not medal-worthy. Let’s talk about the experience a bit: what message are you communicating when, at a checkout, you scan not your products but your customers? Hopefully in the near future Amazon will bring their checkout-less technology to airports and we can just waltz onto planes like humans instead of cattle.

2.) 33 Thomas Street

This is a little meta, but consider the strange, windowless building in lower Manhattan:

It seems (if you look at actual pics and not my shitty drawing) to be the opposite of a “networked” building – no windows, and the Brutalist architecture makes it seem less like a building where things happen and more like a 29-story rock that just landed there.

But, more than anywhere else in the city, I think it deserves the title of “most networky” place in Manhattan.

The building handles routing for AT&T’s long-distance phone network and manages a lot of other communications data. A power failure in the building in 1991 interrupted nearly 10 million phone calls and pretty much all air traffic control at 400 of America’s airports. The NSA allegedly (thanks, Snowden) monitors the communications of the UN, the World Bank, and forty or fifty countries from this building. It’s amazing to me, considering the building’s architecture, how self-effacing it is about its purpose: what is physically the most closed-off building in the city is in fact the most networked.

3.) Rebecca Minkoff Dressing-Room “Smart Mirrors”

Think about the experience of trying clothes on in a dressing room at a store (us gals may find this more of a problem than men do, judging by the studies on gender and fashion retail, but w/e). You’re trying on what looks like a perfect pair of pants…but they’re too big. In a normal shopping experience, you’d have to take those off, put your clothes back on, and scramble around the store looking for a sales associate to help you find a different size.

Not here. Rebecca Minkoff’s stores are outfitted with smart touchscreen mirrors in their dressing rooms.

Need another size? Order it via the screen, and it’ll appear outside your fitting-room door in a few minutes. Want accessory recommendations for the outfits you have with you? The system can do that too. And when it’s time to check out, you can do that from the interface as well, thanks to RFID chips in the clothes.

The numbers show this is working economically: customers who use the experience purchase 30-40% more than the average customer. And it’s hard to overstate another point: they beat Amazon to self-checkout, and they did it in the fashion industry, which on the whole is about as legacy and non-innovative as airlines are (when it comes to customer-centric approaches, at least).
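I have no idea how Minkoff’s system actually works under the hood, but conceptually the RFID checkout piece boils down to mapping the tag IDs seen in the fitting room to SKUs and prices. The sketch below is purely illustrative, with invented tags, items, and prices:

```python
# Purely illustrative: map RFID tag IDs detected in the fitting room
# to a catalog of items and total them up for checkout.
# Tag IDs, catalog contents, and prices are all made up.
CATALOG = {
    "E200-001A": ("Skinny jeans, size 27", 128.00),
    "E200-002B": ("Leather crossbody bag", 295.00),
    "E200-003C": ("Silk blouse, size S", 98.00),
}

def checkout(scanned_tags: list[str]) -> float:
    """Print an itemized list for the tags the mirror saw and return the total."""
    total = 0.0
    for tag in scanned_tags:
        if tag in CATALOG:
            name, price = CATALOG[tag]
            print(f"{name}: ${price:.2f}")
            total += price
    print(f"Total: ${total:.2f}")
    return total

checkout(["E200-001A", "E200-003C"])  # jeans + blouse -> $226.00
```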

Sand Hill Scramble! (Scratchoff Ticket Interface)

Will your startup reach Unicorn Status ($1bn Valuation)? Find out now by playing Silicon Valley Scramble!!

The microcontroller and soft-touch pot:

This was fairly straightforward to build (although the soft-touch switch was a little tricky to connect to the breadboard – I ended up buying a few female-to-male pre-soldered wires because the 3 male connections on the switch weren’t long enough to connect/stay connected to the breadboard). The middle pin is connected to the analog pin on the Arduino (via the purple cable), the pin to the right of it is wired up to ground, and the left-of-center pin is connected to power via two resistors.

Analog input to webpage:

You can see my switch half-works here. I tried several ways of mapping the analog switch values so that sliding around on the switch would actually increase the output value in the expected manner, but no matter what I tried it would only ever be a binary “no touch = ~3-4, any touch = 255”. I also tried treating other output pins on the soft-touch switch as the output pin, just in case I was reading the switch maker’s docs wrong, but the other pins wouldn’t give any sort of dynamic output at all.
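For context, here’s what the mapping I was going for looks like, sketched in Python as a stand-in for Arduino’s map() (the raw readings in the example are placeholders based on what my switch actually produced):

```python
# What I *wanted*: scale the soft pot's raw analog reading (0-1023 on an
# Arduino Uno) down to a 0-255 output, the same math as Arduino's map().
# What I *got*: ~3-4 with no touch and a jump straight to the max with any
# touch, i.e. effectively a binary switch.
def arduino_map(x: int, in_min: int, in_max: int, out_min: int, out_max: int) -> int:
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

for raw in (3, 4, 512, 1023):  # placeholder readings
    print(raw, "->", arduino_map(raw, 0, 1023, 0, 255))
# Intended: a smooth ramp (3 -> 0, 512 -> 127, 1023 -> 255).
# Observed: only the two extremes ever showed up.
```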

I took this binary on-off output and hooked it up to a webpage in the Arduino IDE, and set the webpage to reload every second – you can see the complete code in this video:

Here is the digital portion of the interface, built in Framer (https://www.framer.com). In the video below you can see the graphics I designed and the code I wrote. In brief, it 1.) picks a random number from 0-4 that determines which screen will display when you get to your results, and 2.) dials back the opacity of the “gold scratch-off” layer in accordance with how long you press down. I had also wanted some gold shavings to fly everywhere as you scratched, but some of the animations for this were giving me difficulty and didn’t make the final cut.
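The Framer code itself is in the video, but here’s the gist of that logic sketched in plain Python (the screen names and the two-second reveal time are made up for illustration):

```python
# Plain-Python sketch of the scratch-off logic described above
# (the real version lives in Framer; screen names and timings are invented).
import random

RESULT_SCREENS = ["unicorn", "acquihire", "down_round", "pivot", "shutdown"]

# 1.) Pick which of the five result screens will be revealed.
result_screen = random.randrange(5)
print("Result screen:", RESULT_SCREENS[result_screen])

# 2.) Fade the "gold scratch-off" layer's opacity in proportion to
#     how long the finger has been pressing down.
def scratch_layer_opacity(press_duration_s: float, full_reveal_s: float = 2.0) -> float:
    """1.0 = fully covered, 0.0 = fully scratched off."""
    return max(0.0, 1.0 - press_duration_s / full_reveal_s)

for t in (0.0, 0.5, 1.0, 2.0):
    print(f"pressed {t:.1f}s -> opacity {scratch_layer_opacity(t):.2f}")
```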

As I’ll show below, I couldn’t get Arduino and Framer talking to each other via an API call. However, if I had, I would have been able to take the analog input from the soft-touch switch and set something like “if analog_input == 255, start opacity fade”. If the switch had worked as intended, I would’ve been able to add an even greater level of realism, possibly tracking the switch’s position directly within Framer to, say, move an on-screen finger in accordance with where your actual finger was on the soft-touch pot. Someone on the interwebs accomplished this (somehow) via Arduino -> MIDI (offline, on your Mac) -> Framer (https://www.youtube.com/watch?v=Sik8Ppnegmo&feature=youtu.be). But after downloading the necessary library I couldn’t get his approach to work, and it wasn’t using the methods we were covering in class (over networks/REST calls).

 

In the video below you can see how Framer would normally make API calls and receive JSON. Here I use a Framer function to read some dummy JSON from a dummy endpoint, and print it to the console of my interface.

Knowing that was working, I tried to use the URL of my Arduino webpage to get the JSON data and analog switch value, but I kept getting a 404 error.
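For what it’s worth, what I was trying to get Framer to do amounts to polling the Arduino’s page for JSON and pulling out the switch value. Here’s that idea sketched in Python with the requests library (the IP address and the “analog_input” key are assumptions, not my actual setup):

```python
# Sketch of what the Framer code was trying to do: poll the Arduino's
# web page for JSON and pull out the switch value. The IP address and
# the "analog_input" key are assumptions.
import time
import requests

ARDUINO_URL = "http://192.168.1.50/"  # placeholder for the Arduino's address

def poll_switch_value() -> int | None:
    try:
        resp = requests.get(ARDUINO_URL, timeout=2)
        resp.raise_for_status()               # a 404 would raise here
        return resp.json().get("analog_input")
    except requests.RequestException as err:
        print("Couldn't reach the Arduino:", err)
        return None

while True:
    value = poll_switch_value()
    if value == 255:
        print("Touch detected -- start the opacity fade")
    time.sleep(1)  # mirror the webpage's one-second refresh
```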

Facial Recognition Switch

I ordered my components for class, but unfortunately I don’t have any wires or other equipment lying around my apartment, and, living north of the Washington Square campus, I couldn’t find time between work and school to make the trip down to Tandon to pick up what I needed to build an electrical switch. Instead, I offer up a slightly different answer to the assignment: the story of a handless “switch” hack in the form of a facial-recognition device I made a while ago.


This story requires some background: I was in a Stern class taught by a professor who was a notable VC in tech. In that class, we were required to make ourselves name tags (as the class was 50+ people). Most students made simple little name tags of the paper-tent type. But, wanting to impress this professor in the hopes of either funding for whatever startup idea I was occupied with at the moment or a job at his company, I decided to get creative.

The concept was simple: equip a Raspberry Pi with a camera that would recognize faces and print out a name tag (with a photo, full name, and title) for everyone it saw. The execution was a little trickier.
Getting the facial recognition program to run on the Pi wasn’t altogether too difficult. The actual facial recognition component I used was AWS’s Rekognition, which is astoundingly good. After that, a project I found on GitHub mostly got my Pi’s camera and Rekognition talking to one another without me having to do much work there. I trained my network to recognize photos of myself, my professor, and the handful of students that sat around me in class, and tagged these photos with details like their names. My Pi recognized faces really well at this point, so it was time for the printing component.
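For the curious, the Rekognition piece boils down to a couple of boto3 calls. Here’s a stripped-down sketch (the collection name and image path are placeholders, and the GitHub project I leaned on handled the camera-capture side):

```python
# Stripped-down sketch of the Rekognition lookup using boto3.
# The collection name and image path are placeholders; the collection is
# assumed to be pre-populated with the tagged photos mentioned above.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
COLLECTION_ID = "nametag-faces"

def identify_face(image_path: str) -> str | None:
    """Return the ExternalImageId (e.g. a person's name) of the best match."""
    with open(image_path, "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": f.read()},
            MaxFaces=1,
            FaceMatchThreshold=80,
        )
    matches = response.get("FaceMatches", [])
    if not matches:
        return None
    return matches[0]["Face"].get("ExternalImageId")

print(identify_face("frame_from_pi_camera.jpg"))
```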

The cheapest, most compact printer I could find on Amazon was a handheld Chinese point-of-sale receipt printer. This “POS” was truly a piece of shit, and I spent most of my time getting this little guy to work. This cheap and shitty thing required its own fucking driver (in the form of a mini compact disc!) to be able to accept my Raspberry Pi’s input. Not having a CD reader in my apartment, I hoofed it down to the 24-hour Staples off Union Square at 12:30am to buy one. This printer was also limited to printing only ASCII characters, so I had to convert my images to ASCII art.
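The ASCII conversion itself is conceptually simple: downscale the image, convert it to grayscale, and map each pixel’s brightness to a character. A generic sketch of that with Pillow (not my original script) looks like this:

```python
# Generic image -> ASCII art sketch with Pillow: downscale, convert to
# grayscale, and map each pixel's brightness to a character, dark to light.
from PIL import Image

ASCII_RAMP = "@%#*+=-:. "  # darkest characters first, lightest last

def image_to_ascii(path: str, width: int = 48) -> str:
    img = Image.open(path).convert("L")          # grayscale
    # Receipt-printer characters are taller than wide, so squash the height.
    height = int(img.height * (width / img.width) * 0.5)
    img = img.resize((width, height))
    lines = []
    for y in range(height):
        line = ""
        for x in range(width):
            brightness = img.getpixel((x, y))    # 0 (black) - 255 (white)
            line += ASCII_RAMP[brightness * (len(ASCII_RAMP) - 1) // 255]
        lines.append(line)
    return "\n".join(lines)

print(image_to_ascii("face_crop.jpg"))
```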

An all-nighter later, the POS <-> Pi communication bugs were mostly solved, and I managed to get my little gadget working remarkably well!


 I even managed to get it working on non-human faces, like this stuffed-animal cat I have.

For the purposes of this class, I don’t have a working model of this anymore (it was cannibalized for parts soon after I demoed it), and my documentation isn’t the most comprehensive. But I did get a job as a developer at my professor’s company because of that build, so I’ll let that speak for its quality. And it was a fully handless “switch” – it wouldn’t be hard to mount the Pi & camera on a door and have it operate a servo motor, effectively becoming a facial-recognition lock.

(Nowadays, I’m a little more swanky in the facial-recognition department and own an AWS DeepLens, which lets me hook directly into AWS’s crazy number of incredible IoT offerings. Hoping to get to use this in class a bit in the future!)