Jamie Charry // ITP

Confused since 1989




What I’m super curious to untangle using traceroute is: how connected is the internet, really?  It seems like a pretty silly question, since obviously every page on the internet is theoretically linked to every other address, even if that connection is extremely distant.  But what I want to discover is to what degree infrastructure is shared.  For example, how much does the path from my computer to amazon.com have in common with the path to, say, reddit.com?  My assumption is that the outgoing leg will be pretty consistent, since the path from my router to a large enough parent network shouldn’t change much.  Taking it further, I’d assume that the next step would inevitably be a Tier 1 service provider, maybe several, followed by a path down some more obscure IPs, and finally to the destination.  But I suppose it’s possible that Verizon, my service provider, has such a strong hold on my outgoing traffic that it dominates most of the infrastructure, only handing off to other providers toward the end of the journey.  So let’s find out.



I started, as all good projects do, by slapping together a really rudimentary Chrome extension to capture the hostnames of the sites I visited over the course of a few hours.  Sure, I probably could have just pulled my browsing history, but then I wouldn’t have had an excuse to learn more about how to build Chrome extensions.  Here’s the history I gathered from my hacky extension:


All in all, it’s almost 70 hosts, each of which should be unique.
After that, using a bash script to run traceroute, I was able to save the traces for each hostname.
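The script itself isn’t shown here, but a rough Python equivalent (the hostnames, hop limit, and output paths are my own assumptions, not the original script) might look like:

```python
import subprocess

def trace_cmd(host, max_hops=30):
    """Build the traceroute invocation for a single hostname."""
    return ["traceroute", "-m", str(max_hops), host]

def trace_all(hosts, out_dir="traces"):
    """Run traceroute for each host and save the raw output to a text file."""
    for host in hosts:
        result = subprocess.run(trace_cmd(host), capture_output=True, text=True)
        with open(f"{out_dir}/{host}.txt", "w") as f:
            f.write(result.stdout)
```

Running `trace_all(["amazon.com", "reddit.com"])` would leave one text file of hops per host, ready for later parsing.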

Once I had all the traces, just looking at them manually starts to form an interesting picture.

Let’s look at two side by side.
On the left is amazon.com and on the right, goodreads.com. Immediately it’s clear that the path from my router hits some Verizon servers, then something with the name alter.net, which after some investigation seems to be Verizon-owned hardware, and then, in both cases, Amazon’s servers. My guess is that goodreads.com is hosted on Amazon. Interesting. Also, the trace to amazon.com itself starts timing out after hitting some Amazon servers. My guess here is that we’re hitting higher-level Amazon servers, those used for directing traffic, but as soon as we get directed to more specific servers, those used for hosting the website itself, firewalls are probably set up to protect the actual content. That could explain why every attempt to trace anything after the 13th hop timed out.

Let’s look at one more pair:
Weruva.com vs. Seamless.com: food for cats and food for humans.
Seamless travels through Verizon servers to a few Comcast servers, then starts timing out completely. Nothing to see. Weruva at least finished its trace: it went through Verizon, headed to Washington (either cross-country to the state, or just down to DC) to hit Level 3, then through a small company called XLhost, where Weruva.com is presumably hosted.

Taking this information, I used D3.js to make a simple visualization of the paths traveled. I first used a Python script to parse the data from the txt files into a JSON file containing the trace for each site. Then, once loaded in D3, I could manipulate the data. In the visualization seen here (the link above goes to a dynamic version), the circles represent IP addresses of specific servers, while the red circles indicate terminal nodes. Now, a terminal node doesn’t always mean that the trace found its way to its end destination. Many of the sites timed out and were unable to resolve the full trace, in which case a red circle indicates an incomplete trace. Let’s look for some patterns.
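The parsing step boils down to pulling a hop number and an address out of each traceroute line; a sketch of that step (the actual script isn’t shown in the post, so this is an illustration, not the original code):

```python
import json
import re

# A resolved traceroute hop looks like:
#  3  ae0.gw2.example.net (140.222.1.1)  12.3 ms  11.9 ms  12.1 ms
# A fully timed-out hop is just asterisks:
#  4  * * *
HOP_RE = re.compile(r"^\s*(\d+)\s+(?:(\S+)\s+\((\d+\.\d+\.\d+\.\d+)\)|\*)")

def parse_trace(text):
    """Turn raw traceroute output into a list of {hop, host, ip} dicts."""
    hops = []
    for line in text.splitlines():
        m = HOP_RE.match(line)
        if not m:
            continue
        hops.append({
            "hop": int(m.group(1)),
            "host": m.group(2),   # None for a timed-out hop
            "ip": m.group(3),
        })
    return hops

def save_traces(traces, path="traces.json"):
    """Dump the per-site traces into one JSON file for D3 to load."""
    with open(path, "w") as f:
        json.dump(traces, f, indent=2)
```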

It would seem that there are more unique IPs encountered between steps 6 and 12 than anywhere else. Naively this might imply that there are many paths to a designated endpoint, but more likely it’s because many sites resolve completely within that number of steps, meaning there are fewer cases where we ever even need to go past it. Another caveat is that a lot of traceroutes got lost around this area: they never fully resolved the address, hitting firewalls or otherwise failing to find an IP for the rest of the trace. It also appears to be fairly common for a trace to hit many very similar IPs right next to each other. Looking at this trace:
it seems to linger on several IPs, i.e. 54.239.x.x or 52.93.4.x, for a while. I’m not sure exactly what’s going on here, but it might be an artifact of the way traceroute works rather than evidence of communication with all those unique IPs. As it turns out, all those IPs belong to Amazon, so the path is either bouncing its way around Amazon’s servers for some reason, or traceroute is unable to tell us the exact path traveled. Either way, I don’t yet understand enough about the internet to fully grasp what’s happening here.
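One crude way to confirm that lingering pattern is to collapse consecutive hops that share a /16 prefix (a stand-in for a real ownership lookup; the example addresses are invented):

```python
def prefix16(ip):
    """First two octets of a dotted-quad address, e.g. '54.239'."""
    return ".".join(ip.split(".")[:2]) if ip else None

def collapse_runs(ips):
    """Collapse consecutive hops whose addresses share a /16 prefix,
    returning (prefix, run_length) pairs."""
    runs = []
    for ip in ips:
        p = prefix16(ip)
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [(p, n) for p, n in runs]
```

A long run like `("54.239", 5)` would show a trace spending five consecutive hops inside what looks like one network.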



Coincidentally, I had been thinking about maps for a while when it came time to think about fleshing out an idea for this project.  So, the idea really started with a picture of a map in my head.  That naturally led to the idea of ‘place poems’.  Basically, could I capture the essence of a geographical location in a computer generated poem?  What would that even look like?  Who knows.  But I did know that if I was going to create poems based on geographical locations, I’d need to gather data.  As much of it as I could.

First steps – a map

Before even thinking about text, I got a map up and running using Leaflet.  It turns out it’s really easy to get a simple map working: just a few lines of code, really.

gives pretty much this:

Screen Shot 2016-05-05 at 7.36.38 PM

With a bit of extra styling, I could get the map full screen and add some title text. The great thing about working with Leaflet was that it handles basically everything: clicks, panning, zooming, loading map tiles.

I set up the skeleton of a Flask app to handle the incoming location data.  I’d use this data to query APIs to try to gather location-based information.
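That skeleton amounts to a single route receiving coordinates; a minimal sketch (the route name and payload shape here are assumptions, not the actual app):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/location", methods=["POST"])
def location():
    # The front end POSTs the clicked coordinates as JSON
    data = request.get_json()
    lat, lng = data["lat"], data["lng"]
    # ...this is where the API queries and poem generation would happen...
    return jsonify({"lat": lat, "lng": lng, "status": "ok"})
```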

Getting warmer – APIs

With the map sending data to a Flask application running server-side, it was time to dig into APIs.  APIs are a pain mainly because they are all different: they all have different methods of interaction, and they all return responses in different structures.  So it was really tedious to work through getting data from a bunch of APIs, but alas, I did as much as I could.

Ultimately I gathered data from the following sources:

New York Public Library Digital Collections -> which I downloaded images from, then piped those images to ->

Google Cloud Vision API to get keywords from the images.

NYTimes article search API -> to gather news articles related to the name of the location

UNData -> to gather some specific data on a country

Open Weather Map -> to get realtime weather information

Factual -> to get landmarks (i.e. restaurants, etc) based on coordinates

Wikipedia -> grab pages related to place

Oh, and on the front end I used Google’s reverse geocoding API to convert coordinates obtained from clicking the map to an address that I could break down into city, country, etc.

I ended up creating a class (a really, really messy class) called api_requests that would hide all of the sloppy API code, so that from my main Flask file I could query all the APIs and access their results simply, like this:
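The screenshot of that call site didn’t survive, but the idea is a facade object; a rough sketch of what such an interface could look like, with all names and stubbed values invented for illustration:

```python
class ApiRequests:
    """Facade that hides the per-API request code and stores results
    as properties, so the main app only deals with one object."""

    def __init__(self, lat, lng, place_name):
        self.lat, self.lng, self.place_name = lat, lng, place_name
        self.weather = None
        self.articles = None

    def fetch_all(self):
        # Each helper would hit one API and stash its (messy) response
        self.weather = self._get_weather()
        self.articles = self._get_articles()
        return self

    def _get_weather(self):
        # Stubbed; the real method would call Open Weather Map
        return {"status": "clear", "temp_f": 68}

    def _get_articles(self):
        # Stubbed; the real method would call the NYTimes article search API
        return ["Example headline about " + self.place_name]
```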

Another module, called compose, is responsible for taking all the data gathered from the APIs and arranging it into a paragraph of text.  The nice thing about creating a class for the API requests was that I could save all the results to properties on the class, keeping all that information self-contained and easily accessible.

To make my life a bit easier, Api_Requests has a property, dev, that I can flip to True to load a local file so I don’t have to constantly call the APIs.  Within Api_Requests I also used the shelve module to save Google Cloud Vision results for NYPL images, so that if an image comes up again, the app doesn’t have to download it from NYPL or go to Google to get keywords.  Eventually, this database will grow and grow, so that the program will barely ever have to go to NYPL or Google Cloud Vision, since it’s creating its own database.  Shelve also came in handy for saving UN Data, since for some odd reason it returns 503s seemingly at random.  The same request will sometimes work perfectly, but then suddenly return a 503.  No idea why, so I started cataloging data as it’s downloaded and only go back to the UN API when the data isn’t already present.
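The caching pattern itself is simple; assuming hypothetical key and file names, it comes down to something like:

```python
import shelve

def cached_keywords(image_id, fetch, db_path="vision_cache"):
    """Return cached keywords for an image, calling fetch(image_id)
    only on a cache miss; shelve persists the dict to disk."""
    with shelve.open(db_path) as db:
        if image_id not in db:
            db[image_id] = fetch(image_id)
        return db[image_id]
```

The same wrapper works for the flaky UN Data endpoint: once a response lands in the shelf, the 503s stop mattering.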

As this class is super messy, I won’t post it here, but it’s all viewable here on GitHub.

Poem Time!

Now the hard part.  I had all this data, but what do I do with it?  A short poem?

I first tried constructing short, poem-like lines, inserting random data grabbed from all the API information I had: a random article headline here, a random wiki line there, plus some made-up lines that I could slot the weather info into.
Screen Shot 2016-04-29 at 6.02.07 PM

Not so great.  I was clipping the article lines and wiki lines so they were nice and short; otherwise it felt even weirder having alternating short and long lines.  But if you look through this poem you can see what kind of data I’m pulling.  map, document, and area are keywords generated by running the NYPL photos through Google Cloud Vision.  The weather data is mapped to keywords based on wind speed, temperature, and weather status (i.e. clear, rainy, etc.).  46 socials refers to the categories of places immediately near where the user clicked (socials being social-type places, I guess).  Then there’s the == stuff, which is how Wikipedia formats section headers when you pull entire page content.  So, a decent first attempt, but I wasn’t satisfied.

That’s when I learned about Tracery.

Rethink that whole poem thing

I was thinking about poems too literally.  So much of what we talked about this semester is that a poem doesn’t have to be confined to the strict limits that traditionally define its structure.  The very nature of writing poems with a computer means that poetry is malleable, so I shouldn’t bind myself to recreating a seemingly traditional poetic form.  I decided to ditch the poem form.  In thinking about what would make sense with a location, what came to mind was a Wikipedia-style opening paragraph mashed up with some travel advice.

Using Tracery, I constructed a very convoluted structure:

The origin of the structure is set based on the current weather conditions: if it’s raining, it’ll print out one sentence structure; if clear, another; and so on.  Eventually I’d like to have many of these structures.  The problem with Tracery is that I still end up structuring the sentences; in effect, I’m still authoring these summaries.  Sure, there’s an element of randomness, but it’s not exactly what I envisioned.  What I’d ideally strive for is a completely hands-free generative form.  I could have used a Markov chain for this, but it would be very difficult to insert the weather information and gathered data into a Markov chain.  So for now the Tracery structure works just fine.
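The weather-keyed origin can be illustrated without the tracery library itself; here is a tiny stand-in expander, with the grammar contents entirely invented:

```python
import random

def expand(grammar, symbol="#origin#"):
    """Repeatedly replace #key# tokens with random choices from the
    grammar, mimicking Tracery's flatten step."""
    out = symbol
    while "#" in out:
        start = out.index("#")
        end = out.index("#", start + 1)
        key = out[start + 1:end]
        out = out[:start] + random.choice(grammar[key]) + out[end + 1:]
    return out

def grammar_for(weather):
    """Pick a different origin rule per weather condition,
    the way the app keys its Tracery structure off the forecast."""
    base = {
        "adjective": ["quiet", "restless"],
        "rain": ["A #adjective# rain settles over the streets."],
        "clear": ["The sky is #adjective# and open."],
    }
    base["origin"] = base.get(weather, base["clear"])
    return base
```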

Finishing Touches

Lastly, I had to hook up the save-poem button so that the poems were actually saved.  I added a route to my Python app:

I used uuid to generate random strings and shelve to save the poem using the randomly generated string as a key. The key also acts as a URL parameter, so that a user can pull up that poem again if they feel so inclined.
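Stripped of the Flask plumbing, the save-and-load logic is roughly this (function and file names are my own, for illustration):

```python
import shelve
import uuid

def save_poem(poem, db_path="poems"):
    """Store the poem under a random key; the key doubles as the URL slug."""
    key = uuid.uuid4().hex
    with shelve.open(db_path) as db:
        db[key] = poem
    return key

def load_poem(key, db_path="poems"):
    """Look a poem back up by its key; None if it was never saved."""
    with shelve.open(db_path) as db:
        return db.get(key)
```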

A Few Examples

Screen Shot 2016-05-03 at 11.44.57 PM Screen Shot 2016-05-03 at 11.38.50 PM Screen Shot 2016-05-02 at 1.28.01 AM




Brass, Walnut, and Cherry Lamp

Hero shot




I mocked up a rough version of the lamp I wanted to make in Vectorworks.  I planned on doing a lot of the work by hand, so the model wasn’t so much for getting realistic dimensions as for getting a feel for how it would all go together.

Screen Shot 2016-05-02 at 2.25.06 PM

I wanted the base to have an organic shape to it, rather than just be a flat disc.

Screen Shot 2016-05-02 at 2.26.39 PM

Earlier in the semester I laminated three pieces of 1/2″ hardwood (cherry, walnut, cherry) into a 1.5″ slab.  Perfect for the lamp.  This laminate feels like something out of the ’70s; it would feel at home next to a bright orange sofa.

Previewing the 4-axis cut.  It finishes with a rounded bit, so it looks like it’s going to leave a bit of a lip around the edge, but since I planned on finishing it on the lathe, no big deal.  Also, I didn’t catch this at first, but the side wall is completely gone.  I figured the software would leave a bit of material along the side walls for the support to latch onto, but I guess the design was too close to the edge.  I didn’t realize this until it started cutting, but again, since I was going to finish the part on the lathe, I didn’t have to worry about it.
IMG_4416 IMG_4417
Notice how the side walls are totally gone.  I’ll stop the job before it cuts too deep; as long as I get the main shape of the base I’ll be fine.

After the fact, I realized I needed a way to chuck up the part in the lathe, so without removing the part or changing the home position of the 4-axis, I modified the part file to have a 2.5″ diameter recess where I could fit the chuck.  Worked beautifully.

After I got the main shape and the recess into the part, I roughly cut it out on the band saw.
IMG_4427

Then I chucked it up and cleaned up the profile.
IMG_4428 IMG_4434

Now I needed to make the little elbow piece that would allow me to have a tube run perpendicular from the main lamp stem.  I had more of that laminated block, so I cut off a small chunk to turn.

I needed a way to chuck it up, so I glued on some scrap plywood and let it set overnight. I glued up two, just in case I destroyed one, which was highly likely.
IMG_4451 IMG_4453

I chopped off the corners on the band saw, chucked it backwards, then turned down the plywood so it would fit into the chuck’s inner jaws.
IMG_4454

Flipped it, and started turning the piece round.
IMG_4455 IMG_4457

I needed a 1/2″ hole to fit over the brass tubing I had.
IMG_4458

I also needed a hole in the side for another brass tube, but as I was drilling on the lathe, the plywood in the chuck snapped.  Turns out it doesn’t like sideways pressure.
IMG_4460 IMG_4461

So I took the part to the drill press and finished up the hole in the side.
IMG_4462

I wanted to make sure that both the base and the elbow piece worked as I hoped.  There’s a little brass stopper held in place with a set screw on the main stem to prevent the elbow piece from sliding down.
IMG_4465 IMG_4466

Obviously, if I want to wire up a light bulb in the side swing arm, I need to route wires through, which meant I needed a channel in the main stem.  With a hack saw and a nibbler, I was able to cut a pretty decent channel in both the outer brass tube and the inner threaded rod.
IMG_4467 IMG_4468

It was time to apply some finish to the base and elbow.  You can see some burn marks on the elbow, and I could have sanded them off, but in the moment I kind of liked the way they looked, so I kept them.  Only later did I wish I had sanded them off completely.  But it ultimately looks fine.

A simple coat of tung oil really makes a world of difference.

I was using white porcelain sockets, which didn’t look great with the wood and brass, so I decided to make some socket cups out of a block of cherry I had.  It was a 2″ x 6″ x 6″ bowl blank that I cut into 2″ square stock.  I took it to the lathe and started turning it round.  Initially I tried spindle-turning a tenon onto one end, which would fit into the inner jaws of the chuck.


Turns out, snapping parts that are chucked into the small jaws is a pretty easy thing to do.  The tenons just aren’t strong enough, I guess.
IMG_4478

So I took a different approach and just chucked up the square stock.  It worked much better.
IMG_4492

An issue I couldn’t quite figure out was how to turn square stock round without chipping the hell out of it.  I’m sure there’s a technique, but as this is my first time wood turning, I wasn’t sure of the best way to do it cleanly.  It’s not a huge deal, but it did cause one of my cups to have a chip in it, which was kind of a bummer.
IMG_4491

Started to turn the hole for the cup.


As I got deeper into the part, it became harder and harder to see the tool and what I was doing, so I had to get a feel for ‘turning in the dark’.

I got the hang of it, and after a while the socket fit pretty nicely.
IMG_4497 IMG_4486

I made two cups, and sanded them to the same size.
IMG_4501

Time to wax everything.

Lastly, I needed a channel in the base to allow the wire to run through.  I was terrified of doing this on the router table as I really didn’t want to mess up this part.  It chipped a tiny bit, which I’m pretty unhappy about, but it’s not very visible, so it’s alright.
IMG_4502

With all the parts done, time to wire and assemble.
IMG_4505 IMG_4506

And there you have it! My cherry, walnut, and brass lamp.
IMG_9899 IMG_9902 IMG_9905 IMG_9906 IMG_9908

See or Tell


See or Tell is a “getting to know you” app that aims to bring two anxious daters closer together by exposing the personal content on their phones.


Phones are more and more intrusive these days.  It’s hard to go an entire meal without someone checking their phone for Facebook posts, or texts, or the latest Instagram photos to fill their feeds.  At the same time as we parade these things around in public, many of us fear what would happen if our phones got into the wrong hands.  I’ve known several people who react quite intensely when I pick up and start using their phones.  In fact, at this point, holding someone else’s phone feels foreign.  The content on their phone might be vastly different from the content on mine.  The catch is that much of that content is considered private.  When we take photos, we often make the implicit assumption that unless we explicitly show a photo to someone (or post it somewhere), it will only ever be seen by us.  That morning selfie of your bedhead.  The shots you take to see if that pimple is receding.  The videos of you playing Hey Ho poorly on the guitar.  They’re all there, and they’re all hidden from the world.

But what if they weren’t?


The motivation left the app a bit wide open.  What information did we want to expose?  How did we want to frame it?  And technically, how would we do it?  Well, it turns out the first two questions became moot once we started playing with the technology.

First attempt – Voice Recognition

Google’s WebSpeech API looked promising for recording conversations.  Our initial idea was to record, and when the app picked up a keyword, it would intrude into the conversation, offering information about that keyword.  It wasn’t quite in line with sharing secret information, but we wanted to tie it into personal data.  So if I said “mom”, the app would read my last text message from my mom out loud.

Turns out the WebSpeech API doesn’t work on WebKit! Which meant we were dead in the water with this idea, because after a full day of searching, I couldn’t find anything that would fit our needs.  So, time to scrap idea #1.

Second attempt – People Search

Our next idea was to gather information about each of our users by scraping the web.  It sounds relatively easy, but in practice there’s no good way to do this.  Unless you want to ask your users to log into a bunch of different accounts, much of the information on the web is actually pretty private.  Facebook profiles in particular are hard to get at unless you’re logged in and happen to be friends with the other person.  There’s Twitter, sure, and Instagram, but that information is already so public that it doesn’t really get at our point.

I looked into people-search APIs and found a few sketchy examples: Pipl and Full Contact.  Pipl has a demo where you enter a person’s name and it spits back some basic information: date of birth, jobs (looks like it was pulled from LinkedIn), education (again, probably LinkedIn), addresses (this info exists on Whitepages), and what look to be Twitter followers.  Not bad, but not free.  Full Contact was no better.  So it was starting to look like people search was out, too.

Third (and final) attempt – Photo Sharing

So after scrapping ideas 1 and 2, we landed on sort of a game, where users could choose to answer a personal question or share some private information from their phone.  This again proved problematic, since iOS doesn’t allow apps access to the root file system; most private information is actually off limits to apps.  This makes sense as a security measure: you don’t necessarily want any random app having full access to your photos or your text messages.  Apple provides hooks through which you can access some of that data, but it requires direct action from the user.  For example, for an app to access a photo, the user actually has to select it from the iOS photo picker.  Which meant grabbing random photos from the user’s camera roll was out.  So were text messages.  It’s possible to access someone’s private calendars, but that was less exciting to us than photos.  So, for the sake of building something, we decided to go with a less impactful model of our game, See or Tell.

The Build

The underlying architecture is fairly simple.  Two phones connect to a socket server and enter a specific room.  Once they’re in the room, the server keeps track of the game state and sends messages to the clients when they need to update.  This keeps both phones constantly in sync with one another.
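Server-side, a room is basically a small bundle of game state plus whose turn it is; a sketch of that state machine, with all field and method names assumed for illustration (the real server would broadcast over sockets after each change):

```python
class Room:
    """Tracks one game between two phones."""

    def __init__(self, code):
        self.code = code
        self.players = []
        self.turn = 0      # index into players: whose turn to ask
        self.level = 1

    def join(self, player):
        if len(self.players) >= 2:
            raise ValueError("room is full")
        self.players.append(player)

    def asker(self):
        """The player currently asking the question."""
        return self.players[self.turn]

    def round_over(self):
        """End the round: flip whose turn it is and advance the level."""
        self.turn = 1 - self.turn
        self.level += 1
```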


Alice and Bob are on a date.  They both open the app and enter room Qwer.  They are greeted with level 1 questions.  Bob asks Alice to show him her last Google search.  Alice, knowing her last search was ‘what do bed bugs look like’, decides it might be a bit radical to share that on a first date with someone she just met.  So she chooses “I won’t answer”.


Alice is then prompted to select a photo from her phone and share it with Bob.

They both wait a spell while the photo is sent between phones.


And when the photo is ready, it pops up on Bob’s phone, and he can close it and continue the game.  After this round, the questions flip and it becomes Alice’s turn to ask Bob a question.


And that’s it!

Looking forward

Obviously this app is not where we want it.  We had much grander ideas about sharing personal data, but ran into some annoying technical limitations that caused us to pivot quite a few times.  But even with this format, there are ways to make it more impactful.

  1. After a user shares a photo, the other user can decide whether they accept that photo as ‘personal’ enough.  If, for example, Alice shared a photo of her cat, Bob could decide that the photo didn’t measure up to the level of intimacy the question would have brought about, and ask Alice to either go back and answer the question, or share a more personal photo.
  2. The same can be done for when the user decides to answer a question.
  3. Add more data to share, i.e. calendar data, photo metadata, geolocation, steps, and health information from HealthKit.  When the user doesn’t want to answer, their phone would select a random type of data to share.
  4. UI and UX development.  Right now it’s two screens, a few buttons, and some p tags.  Simple is good, but it needs some design to spruce things up.
  5. Is there a better way to share a photo than sending an entire base64 string to the server, then back to the second client?
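On that last point: base64 alone inflates every photo by roughly a third, which is easy to verify:

```python
import base64

def b64_overhead(n_bytes):
    """Ratio of base64-encoded size to raw size for an n-byte payload."""
    raw = bytes(n_bytes)
    encoded = base64.b64encode(raw)
    return len(encoded) / n_bytes
```

A 3 MB photo becomes a 4 MB string before it ever leaves the first phone, so a binary transport (raw socket frames or a multipart upload) would be a real win.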

4-axis rd. 2

I’m thinking of making a lamp, and with the 4-axis I now have the opportunity to make a custom base for it.  I had a piece of 1/2″-thick walnut that would make a great testing ground.  Ideally, after getting the part just the way I want it, I’d like to use a thicker piece, but for now this is a great test.

I made a simple curved profile, and swept it around to create a natural looking curved base.  A 1/2″ brass tube will sit in a notch in the center.

Screen Shot 2016-04-13 at 2.52.13 PM

Loaded it up into SRP Player and added tabs.  I wasn’t too worried about these tabs, since I could just sand them off afterwards.

After ensuring my job wouldn’t be crazy long, I got my part dimensions.
IMG_4341 IMG_4339

Then cut a piece of 1/2″ walnut to size


Found the center to sit in the 4-axis…

Loaded up my bit.

And started milling.  I used two bits: a 1/4″ square bit for roughing, and a 1/4″ rounded bit for finishing, to get really smooth curves.
IMG_4347 IMG_4352 IMG_4354

I took it to the band saw and cut off the tabs, then sanded them down.

It’s not perfect.  I need to make the center hole a tiny bit wider to actually fit the 1/2″ pipe; a 1/2″ peg does not fit into a 1/2″ hole.  I should’ve known that, but I overlooked it.  I also need to make the curve a little gentler so there’s more material surrounding the tube.  On this iteration there wasn’t enough material there, so the tiny lip left behind got a bit chewed up.
IMG_4359


AOAC Final Concept

Rationale and concept

I’ll be working with Dalit on a final app.  We’ve decided to take a critical look at how mobile technology inserts itself into our lives by making a satirical ‘conversation helper’ app.  It’s based on the assumption that as computers get better at decoding what we mean, they’ll become more aggressive in their attempts to grab our attention.  If my computer knows I’m thinking about buying a new coat, it will start pointing out when I walk past a store that has coats on sale.  It will inject itself into my world.  Phones already do this to a large extent, but what happens when we start relying on our phones to help us through a conversation?  What if we get to a point where we need our phones listening to us and guiding us through our daily encounters?

So that’s where we’re starting.  Imagine we’re living in a world where people need to be told what to talk about, or to stop at that store because the coat they want is on sale.  Without being explicitly told by their phone, they are lost.  Our app will be a commentary on this idea.

So what will the app actually do?

It will listen to a conversation between two people.  As it captures audio, it will analyze it for keywords that it knows (which we’ll probably pre-program for the sake of prototyping).  When it hears a word that’s in its database, it’ll chime into the conversation, unprompted, and start reciting some information about whatever it heard.  For example, if it hears the keyword ‘shoes’, it’ll pop up an ad for a sale going on right now at a nearby shoe store.  If it hears the name of a famous actor, it’ll pull up their IMDB page and recite some of their most recent movies, or news stories involving that actor.  If it hears the word ‘politics’ or the name of a presidential candidate, it’ll pull up some stats about that candidate, or some information about what’s happening in politics right now.
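The pre-programmed keyword matching can be prototyped with a hard-coded lookup; a minimal sketch, where the keywords and canned responses are invented:

```python
# Hypothetical keyword database for the prototype
KEYWORD_RESPONSES = {
    "shoes": "There's a shoe sale two blocks away!",
    "politics": "Here's what's happening in the election right now...",
}

def interject(transcript):
    """Scan a chunk of captured speech for known keywords and return
    the canned interruptions the app would recite."""
    words = transcript.lower().split()
    return [KEYWORD_RESPONSES[w] for w in words if w in KEYWORD_RESPONSES]
```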

Inspiration for our digital assistant

4 Axis Mill Skill Builder

I really had no idea what to make.  For whatever reason I felt stumped this week.  So I made a really simple letter ‘J’ using the 4-axis mill.

Screen Shot 2016-04-06 at 11.05.24 AM
I cut a piece of scrap plywood to size and used a 1/8″ flat bit.  The setup was smooth.  I added some tabs to the model in SRP Player, knowing that I could just cut or file them off later, since it’s just plywood.  I made the part nice and small so that it would be a quick job for my first time using this machine.


Homework 4 – Modules and Functions

For this assignment, I modularized my midterm assignment and put the relevant chunks of code into functions within a module.  The simplest thing to do was to just wrap stuff up in functions, but the value of functions is to make code both reusable and flexible.  After looking at my code for a while, the only real flexibility I could add was to allow for different kinds of word substitution, i.e. synonyms, antonyms, hyponyms, etc.

So my code went from being pretty bloated to fundamentally coming down to one line, which is called like so:

from synonymize import synonymize

poem = synonymize('I am a dummy line of text', 'synonym')

Meaning I can pass in any line of text, along with the type of substitution, and I’ll get back my poem as a list.  Since it’s a list, a few extra lines handle printing the lines out nicely, but the heart of the program is now extremely simple.  The module, however, is pretty messy, and looks like this:

from random import choice
from wordnik import *
apiUrl = 'http://api.wordnik.com/v4'
apiKey = '********' 
client = swagger.ApiClient(apiKey, apiUrl)
wordApi = WordApi.WordApi(client)

# Get synonyms for every word from wordnik
# Pass a list of words, return dictionary of synonyms for each word
# using wordnik API
# acceptable relationship types:
#   synonym, antonym, hyponym, some others
def relationship_dict(word_list, relationship='synonym'):
    synonyms = {}
    if len(word_list) == 0:
        return {}
    for word in word_list:
        try:
            syn = wordApi.getRelatedWords(word, relationshipTypes=relationship, useCanonical='true')
            # Save list of synonyms into dict with words as the key
            synonyms[word] = syn[0].words
        except TypeError: 
            # If no synonyms were found, save an empty list
            synonyms[word] = []

    return synonyms

# Break an array of words into the structure
# of the poem - i.e. each successive line
# has one more word
def create_structure(word_array):
    structured_list = []
    for i in range(len(word_array)):
        # Line i contains the first i + 1 words
        structured_list.append(word_array[:i + 1])
    return structured_list

# Line constructor - give it a line, 
# and a dictionary of words to substitute, 
# and it'll randomly replace
def construct_line(line, word_subs):
    word_counter = 0
    for word in line:
        try:
            rand_sub = choice(word_subs[word])
        except (KeyError, IndexError):
            # if there's no substitute, just keep the same word
            rand_sub = word
        line[word_counter] = rand_sub
        word_counter += 1

    return line

# Master function - creates the poem
def synonymize(line, relationship='synonym'):
    word_array = line.split()
    # Create structure of poem
    all_lines = create_structure(word_array)

    # Create relationship words dictionary
    relationshipDict = relationship_dict(word_array, relationship)

    # Construct lines, one at a time
    line_counter = 0
    for l in all_lines:
        all_lines[line_counter] = construct_line(l, relationshipDict)
        line_counter += 1

    return all_lines    

The module provides a function to create a dictionary of related words given a list of words and a substitution type, a function to generate the structure of the poem (i.e. each new line has one more word), and a function to do substitution on a single line. These are all called from a master function, synonymize(), which spits out the poem.

The output is still just as terrible….


Adventurous dogs amaze me. This morning my dog yelped because he unexpectedly stepped on a shipping bag. =\
        foolhardy qualifier
            restless andiron perplex
                foolhardy dude astonish mine
                    hazardous bloke perplex my This
                        dangerous chap perplex I This morningtide
                            hazardous man amazement my ass This morn my
                                dangerous andiron astound us This morrow mine andiron
                                    daring fellow bewilder I This forenoon my dude boast
                                        hazardous chap confound my ass This morrow mine dude yaup forwhy
                                            foolhardy fellow astonish myself This morningtide mine dude yaup forwhy it
                                                dangerous wretch amazement myself This forenoon mine man yaup ask they unexpectedly
                                                    hazardous firedog perplex myself This morrow mine fellow yaup ask man unexpectedly stepped
                                                        dangerous firedog perplex my ass This forenoon my fellow boast inasmuch as his ass unexpectedly stepped concerning
                                                            hazardous dude bewilder myself This morningtide mine man yelped since s/he unexpectedly stepped concerning ongoing
                                                                enterprising man surprise us This morrow mine man boast ask qualifier unexpectedly stepped onward adhering ships
                                                                    hazardous chap bewilder us This morningtide mine firedog yaup for informal unexpectedly stepped about of navigation sack
                                                                        restless guy confound my This morrow mine andiron boast long her ass unexpectedly stepped of a navigation swell invalid

Can technology feed us?

Once wholly faithful in the ability of technology to solve all that ails our society, today I've become more cynical, more skeptical.  My teeth fall out because I didn't take good enough care of them?  Technology can fix that.  We spill oil in the ocean?  Let's just clean it up with bacteria that eat oil.  That way when it happens again, we'll be more prepared.  The reason I'm skeptical these days is that technology promises to let us off the hook.  It promises to fix what we've broken.  And while that's not necessarily a bad thing, it's a dangerous mindset to get into.  Extrapolated, it implies that we can destroy our world, because technology will fix it.  A good analogy here is recycling.  Our faith in recycling eases our conscience.  We are led to believe, whether on purpose or not, that recycling is a lossless cycle.  But recycling is extremely expensive and energy intensive, and is not sustainable indefinitely.  Yet if we can develop better and better recycling plants, it means we don't have to worry about our level of consumption, because we can just recycle everything!  It's backwards thinking, in my opinion.  The goal should be to use what we have efficiently, not to use whatever we want and then rely on technology to fix our bad habits.

With regards to food, we'll need technology going forward, if for no other reason than to undo all the damage we've caused: soil erosion, runoff, groundwater pollution, emissions.  Many of these problems are too deep to fix without some creative technological solutions.  I'm just worried that if we manage to mitigate a lot of them with technology, will we forget the lessons of the past?  Will we become a species waiting for the next technological revolution?  Will we reach a state where consumerism is entirely sustainable and our food is nutritious, cheap, and accessible to all?

I was irked by this line in particular: “Then there is a whole different group of highly technical people who are building robotics for the field, sensor-based technology, automated watering systems, new food-packaging technologies, and big-data-related inventory control to reduce waste.” These, he says, are “the people who are going to solve the big problems.”  The implication is that the consumer is powerless; we must rely on technology to solve the big problems.  It even insinuates that people who care deeply, but are not in the world of high tech, are well-meaning but ultimately useless.

Technology is incredibly important.  I don’t mean to shun it or fear it.  At this point, we need it.  But it cannot be our savior.

The Legislator

Poking around the NYTimes API, I noticed they had information on recently introduced bills to congress.  Personally, I think it’s hilarious and amazing that legitimate bills being brought to congress consist of things like A bill to establish the composition known as America the Beautiful as the national anthem side by side with Authorization for the Use of Military Force against the Islamic State of Iraq and the Levant.  These are both real bills, going through the senate at roughly the same time.  The language used for titling bills is pretty interesting.  Bold action mixes with the mundane in bill titles like Robocall Enforcement Improvements Act of 2014.  Some titles are simply undecipherable unless you’re in the know: SCRUB Act of 2014.

So I thought it’d be fun to take actual language used in these bills, mix it around, and come up with new bill names.  But I didn’t just want to randomly grab words out of a hat, so I tried to give some structure to the sentences.  This meant identifying the part of speech of each word from each bill title.  Once the words were properly categorized into their parts of speech, I could create a structure for my Python-based congress-person to spit out bill title after bill title.

Backing up a bit, the first step was to actually get bill titles to work with.  So I used the python requests library to make a call to the NYTimes API.

import requests
import json

with open('cred.json') as cred_file:
    creds = json.load(cred_file)

recentlyIntroducedBillsURL = 'http://api.nytimes.com/svc/politics/v3/us/legislative/congress/113/senate/bills/introduced.json'

#Take url endpoint, return json data
def fetchData(url, key):
    r = requests.get(url + '?api-key=' + key)
    return r.json()

data = fetchData(recentlyIntroducedBillsURL, creds['CONGRESS_KEY'])
bills = data['results'][0]['bills']   #extract bills from data structure

I loaded my API credentials from an external file using the json module.  Then I could pass the appropriate NYTimes API endpoint and my creds to a function that makes the call and returns the JSON data.  This endpoint returned 20 bills introduced to congress in 2014.  Extracting the bills as a list would let me get at their titles.
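The cred.json file itself isn't shown; given the creds['CONGRESS_KEY'] lookup above, it presumably looks something like this (the placeholder value is mine):

```json
{
    "CONGRESS_KEY": "your-nytimes-api-key-here"
}
```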

Next all titles were extracted and added to an array:

titles = []
for bill in bills:
    titles.append(bill['title'])

Next all the titles were broken up and all the words placed into a single array:

# Decompose lists of strings into one big
# array where each index = one word
def soupify(obj):
    wordSoup = []
    for item in obj:
        # If this item is a string:
        if isinstance(item, basestring):
            arr = item.split()
            for s in arr:
                wordSoup.append(s)
        else:
            # If the item is an array, we have to handle it differently
            print 'notstring'

    return wordSoup

wordSoup = soupify(titles)

I envisioned my soupify function being flexible, so I could put either lists of lists, or lists of strings, or whatever into it, and it would break up all the individual words into one single array.  I didn’t implement it fully, but what’s it’s checking for now is just that each line is a string, then breaking up that string and adding each word to a master array.
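A fuller version of that idea, recursing into nested lists, might look like the sketch below (written in Python 3 syntax, where str replaces basestring; the function name is my own):

```python
# Flatten strings and arbitrarily nested lists of strings into one
# word-per-index array: the fuller behavior hinted at above.
def soupify_nested(obj):
    wordSoup = []
    for item in obj:
        if isinstance(item, str):            # basestring on Python 2
            wordSoup.extend(item.split())
        elif isinstance(item, list):
            # recurse into sublists instead of just printing 'notstring'
            wordSoup.extend(soupify_nested(item))
    return wordSoup

soupify_nested(['A bill to', ['establish the', 'composition']])
# → ['A', 'bill', 'to', 'establish', 'the', 'composition']
```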

Time to categorize those words. Python provides a library called nltk, which was relatively easy to get up and running with. After about 15 minutes of confusion, I managed to install nltk and use it to categorize my list of words:

import nltk

# Expects an array of words
def categorize(soup):
    d = {}
    text = nltk.Text(soup)      # Turn word soup into text
    tags = nltk.pos_tag(text)   # Tag each word with POS

    # Break individual pairs into one large dict
    # categorized by POS (part of speech)
    for pair in tags:
        # group words with like tags into arrays within a dict
        tag = pair[1]
        if tag in d:
            # tag already exists, just append new word
            d[tag].append(pair[0])
        else:
            # Tag doesn't yet exist, create an array
            d[tag] = [pair[0]]
    return d

words = categorize(wordSoup)    # Create dict of categorized words
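To see the grouping step in isolation (without installing nltk), the same dict-building logic can be run on a hand-tagged list. The words and tags here are toy data of my own:

```python
# Same grouping step as categorize(), minus the nltk tagging:
# collect words into lists keyed by their POS tag
def group_by_tag(tags):
    d = {}
    for word, tag in tags:
        d.setdefault(tag, []).append(word)
    return d

tags = [('Robocall', 'NNP'), ('Enforcement', 'NNP'), ('of', 'IN'), ('Act', 'NNP')]
grouped = group_by_tag(tags)
# → {'NNP': ['Robocall', 'Enforcement', 'Act'], 'IN': ['of']}
```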

Finally, I had to come up with a structure for my Bill creation algorithm.  Looking at my list of words, I picked an order of verbs, nouns, adjectives, and articles that I thought would make sense.  It was helpful to create a helper function to get a random word from my dictionary by feeding it tags that I wanted to choose from.  For instance, I could say “give me a random word with either the tag ‘NN’ (noun), or the tag ‘NNP’ (noun, proper)”.  legislate() is what finally builds the Bill name.  The structure is controlled by an array of arrays, where each index of the parent array represents one word in the final sentence.  So in this example, the first word is a random selection from any of the tags VB, VBG, or VBD.  The second word is a noun with any of the tags NN, NNPS, NNS, NNP.  And so on.  The basic structure ends up being: Verb -> Noun -> Preposition or Conjunction -> Adjective -> Noun -> Preposition -> Determiner -> Noun

import random

def getRandomWord(d, tagList):
    l = []
    for tag in tagList:
        for word in d[tag]:
            l.append(word)

    return random.choice(l)

def legislate(wds):
    # Structure:
        # 1. VBD/VB/VBG -> NN/NNPS/NNS/NNP -> IN -> JJ -> NN/NNPS/NNS/NNP -> IN -> DT -> NN/NNPS/NNS/NNP

    sentenceStructure = [
            ['VB', 'VBG', 'VBD'],
            ['NN', 'NNPS', 'NNS', 'NNP'],
            ['IN'],
            ['JJ'],
            ['NN', 'NNPS', 'NNS', 'NNP'],
            ['IN'],
            ['DT'],
            ['NN', 'NNPS', 'NNS', 'NNP']
    ]
    sentence = ''
    for i in range(0, 8):
        currentWord = getRandomWord(wds, sentenceStructure[i])
        if currentWord[-1] == '.':
            currentWord = currentWord[:len(currentWord) - 1]
        if i == 0:
            sentence += currentWord.capitalize() + ' '
        else:
            sentence += currentWord.lower() + ' '

    print sentence

And here’s some sample output:

Guarding land of national technology as the big
Remove delinquency in certain enforcement against the land
Provide trading of certain brothers of the filibuster
Improve compassionate of categorical southeastern of the forest
Allow beautiful in certain a of the act
Restore resolution of categorical brothers of the executive
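To replay the selection step without the NYTimes data, getRandomWord can be run against a toy dictionary of my own (this version adds a .get guard so a tag missing from the dictionary doesn't raise a KeyError):

```python
import random

# Pool the words under every requested tag, then pick one at random
def getRandomWord(d, tagList):
    candidates = []
    for tag in tagList:
        candidates.extend(d.get(tag, []))  # skip tags absent from the dict
    return random.choice(candidates)

words = {'NN': ['filibuster'], 'NNP': ['Act'], 'IN': ['of']}
word = getRandomWord(words, ['NN', 'NNPS', 'NNS', 'NNP'])
# word is either 'filibuster' or 'Act'
```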

© 2016 Jamie Charry // ITP
