
Monday, November 30, 2015

Mapping Community Assets in Mumbai with Google's Places API

I mentioned earlier that I was working with a team in a design studio focused on the waterfront in Mumbai.  (Our work will be published in print and at resilientwaterfronts.org.) Mumbai is absolutely fascinating and words cannot describe what it is like to be there and see all of the activity taking place.

The island city of Mumbai is the heart of the region's financial and economic activity. Its shape as a narrow peninsula directly impacts how the city has developed. Mumbai is twice as dense as New York City and five times as dense as Shanghai. I've found it useful to compare Greater Mumbai and the island city to New York's five boroughs and Manhattan Island.




Within the island city, our study area is the old port on the eastern waterfront. A newer, modern port is currently in operation across the harbor in Navi Mumbai, while the older port on the island city has much shallower waters that can't accommodate large container ships. Our studio is focused on the 1,800 acres that comprise the old port: an underutilized area lying in the heart of one of the densest cities in the world. The government of India has been building momentum toward redeveloping at least two-thirds of this area. In a city as dense and active as Mumbai, an opportunity like this is a huge one-time chance to shape the future of the city and open up even greater potential for its development.

The base map below, created by the designers in my studio team, highlights the 1,800 acre port on the eastern coast of Mumbai that we are studying:





Before traveling there we did a lot of prep work gathering data on the existing conditions. However, we found that GIS data for Mumbai can be hard to come by.

Part of our planning process started with gathering an inventory of the existing community assets. OpenStreetMap (OSM) has a lot of data, but I found that it wasn't 100% complete, nor was it specific to the topics we wanted to focus on. I decided to use the Google Places API to find the data we needed to help complete a community assets inventory.

With the Google Places API I was able to search a few areas and grab results for high schools, churches/mosques/temples, and hospitals. The API, however, limits you to 60 results per search, which is unfortunate. I mitigated this restriction by defining a short radius of about 1/4 mile around several points along the study area. In this case we used the train stations for the Harbor Line, which runs down along the eastern coast of Mumbai.

The Google Places API has some documentation HERE. Basically, you can sign up for an API key that will allow you to use the tool. Some APIs, like this one, are really easy to use, even if you don't have any programming knowledge at all. You enter everything you want into the browser's address bar as one long URL, filling in variables such as "location=" (the latitude and longitude) and a keyword or topic like "keyword=churches". You'll get a result back, probably in XML, that you can save to your computer and open in Excel using the Paste Special command with "Unicode Text" or "XML" to format the results.

Here's an example of the URL I typed in (no spaces, all one long line) for a lat/long of 18.9442, 72.835 to find all the art galleries in an area:

"https://maps.googleapis.com/maps/api/place/nearbysearch/xml?location=18.9442,72.835
&radius=1000
&keyword=%22art%20gallery%28
&key=Secret_API_Key
"

APIs can be tricky, and you'd need to know some programming if you wanted to include the information on a website. However, if you're like me and just want to output the data to a CSV file or something simple from a series of results, it can be pretty easy. You just type in the URL, save the result as an XML file, and then open it in Excel.


In this case I only had about 10 points to search, and up to 3 pages of results for each. If I wanted I could have done this manually without too much time or effort.

However, if you need to enter a very long list, let's say 800 addresses, then you would want to run a script in Python or something similar that could automatically take that list, query all of the results for you (by plugging each address into the URL format the API wants), and then download all of the results into a file. This can be a little trickier, but it's definitely not too difficult to learn. If you are intimidated by coding, you can still use APIs by manually entering a handful of searches.
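As a rough illustration, here's a minimal sketch of what that kind of batch script might look like in Python, using the requests library. The station coordinates, keyword, and API key below are placeholders, and paging through the extra result pages (via the next_page_token the API returns) is left out for brevity:

# Batch-query the Places API for several points and save the results
# to a CSV file. Coordinates, keyword, and key below are placeholders.
import csv
import xml.etree.ElementTree as ET

import requests

API_KEY = "Secret_API_Key"
SEARCH_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/xml"

# Hypothetical search points (e.g., train stations) as (lat, lon) pairs.
points = [(18.9442, 72.8350), (18.9633, 72.8416)]

with open("places_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "lat", "lon", "vicinity"])
    for lat, lon in points:
        params = {"location": "{},{}".format(lat, lon),
                  "radius": 400,  # about 1/4 mile, in meters
                  "keyword": "art gallery",
                  "key": API_KEY}
        root = ET.fromstring(requests.get(SEARCH_URL, params=params).content)
        # Each <result> element holds one place's name and coordinates.
        for result in root.findall("result"):
            writer.writerow([result.findtext("name"),
                             result.findtext("geometry/location/lat"),
                             result.findtext("geometry/location/lng"),
                             result.findtext("vicinity")])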


From there, the results could be placed back on a map in GIS using the lat/lon coordinates in the results file.
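If you'd rather script that step too, here's a hedged sketch using arcpy in ArcMap. The workspace path and file names are hypothetical, and it assumes the places_results.csv written by the script above:

# Pull the CSV of API results back into ArcMap as a point layer.
# The workspace path and output names here are hypothetical.
import arcpy

arcpy.env.workspace = r"C:\projects\mumbai"

wgs84 = arcpy.SpatialReference(4326)  # the API returns lat/lon in WGS84
arcpy.MakeXYEventLayer_management(
    "places_results.csv", "lon", "lat", "places_layer", wgs84)

# Persist the temporary event layer as a feature class for mapping.
arcpy.CopyFeatures_management("places_layer", "community_assets.shp")

Here are the results: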





We were able to get a decent list of results by using this API, and we mapped them. We've since combined these results with other data, such as population density, as well as neighborhood shapefiles that we digitized into GIS from other sources.



Friday, November 27, 2015

Agent-Based Modeling with Python and ArcMap

I spent a few hours last week creating an agent-based model using Python and ESRI's ArcMap (GIS). I was inspired by a presentation I had seen last year based on a publication in Philosophy of Science by Michael Weisberg and Ryan Muldoon.

Weisberg and Muldoon created an agent-based model that compared the paths of discovery for two types of scientists: followers and mavericks. They wanted to identify which might be more efficient at making scientific discoveries, and to identify any interesting patterns in the "path of scientific progress" for either approach.

Basically, they created an elevation layer with two peaks. The rise in elevation at a point indicated an increase in scientific knowledge, and the peaks represented scientific discoveries.
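Their actual surface isn't reproduced here, but as a rough stand-in, a two-peak landscape like that can be generated as the sum of two Gaussian bumps:

# A stand-in, two-peak "epistemic landscape" on a 100 x 100 grid
# (an illustration, not Weisberg and Muldoon's actual surface).
import numpy as np

x, y = np.meshgrid(np.arange(100), np.arange(100))

def peak(cx, cy, height, spread):
    # A Gaussian bump centered at (cx, cy).
    return height * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / spread)

landscape = peak(30, 40, 100.0, 200.0) + peak(70, 60, 80.0, 300.0)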

Here is a sample image of the elevation layers from their paper:




The second piece of the model was the "scientists." Several scientists were randomly distributed throughout the grid layer. Two different models were created, in which the scientists would all act in one of two ways: as followers or as mavericks.

Each time a follower moved from one grid cell to the next, they "discovered" the value of that cell. Then they would look around at all 8 neighboring cells, and pick the already-discovered cell with the highest value when possible.

Mavericks behaved in the opposite fashion. They would always pick an unexplored cell first, if possible. If all the neighboring cells had been visited, then they would pick the one with the highest value.
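Reading those two rules as code makes the contrast clearer. Here's a sketch of my paraphrase of the rules (not Weisberg and Muldoon's actual implementation), assuming landscape is a 2D numpy array of cell values and visited is a same-shaped boolean array:

# My paraphrase of the two decision rules; `landscape` is a 2D numpy
# array of cell values, `visited` a same-shaped boolean array.
import random

def neighbors(row, col, shape):
    # The up-to-8 in-bounds cells surrounding (row, col).
    return [(row + dr, col + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= row + dr < shape[0] and 0 <= col + dc < shape[1]]

def follower_move(pos, landscape, visited):
    # Followers prefer the already-discovered neighbor of highest value.
    cells = neighbors(pos[0], pos[1], landscape.shape)
    seen = [c for c in cells if visited[c]]
    return max(seen or cells, key=lambda c: landscape[c])

def maverick_move(pos, landscape, visited):
    # Mavericks head for unexplored cells first; only when every
    # neighbor has been visited do they fall back to the highest value.
    cells = neighbors(pos[0], pos[1], landscape.shape)
    unseen = [c for c in cells if not visited[c]]
    return random.choice(unseen) if unseen else max(cells, key=lambda c: landscape[c])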

Weisberg and Muldoon discovered that within their model, the mavericks found the discoveries more efficiently and more often than the followers. When mixed with followers, mavericks also helped guide them more quickly to new areas of progress.

Here is another illustration from Weisberg and Muldoon's "Epistemic Landscapes and the Division of Cognitive Labor." It depicts (A) the paths created by a set of "followers" in the model from their starting point (B) toward the peaks:





The model I built last week is a much simpler version. I did not create decision-making logic as complex as the mavericks' or followers', but I did create an agent-based model in which a "scientist" moves around a grid until it finds the point of highest elevation. In place of complex decision making, I moved the scientist randomly among the neighboring cells. I also made sure that the agent would not backtrack over a cell already traveled, nor move off the boundaries of the grid.
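Here's a minimal sketch of that core loop (a stand-in for the real script linked below, which ran against a raster in ArcMap). I treat "no backtracking" as a preference, so the agent only re-enters a traveled cell when every neighbor has already been visited:

# A random-walk "scientist" wandering a 2D numpy elevation grid until
# it lands on the cell of highest elevation. It prefers unvisited
# neighbors and never steps off the grid boundaries.
import random

import numpy as np

def random_walk(landscape, start):
    rows, cols = landscape.shape
    peak_value = landscape.max()
    pos, visited, steps = start, {start}, 0
    while landscape[pos] < peak_value:
        # All in-bounds neighboring cells of the current position.
        nbrs = [(pos[0] + dr, pos[1] + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= pos[0] + dr < rows and 0 <= pos[1] + dc < cols]
        unvisited = [n for n in nbrs if n not in visited]
        pos = random.choice(unvisited or nbrs)
        visited.add(pos)
        steps += 1
    return visited, steps

if __name__ == "__main__":
    explored, steps = random_walk(np.random.rand(50, 50), (0, 0))
    print("explored {} cells in {} steps".format(len(explored), steps))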

Here is an excerpt of my script, but the entire text can be downloaded (as a py file) HERE.




The final result of my model could consistently find the highest point of elevation; however, as you can see, the path was extremely inefficient. The "scientist" explored nearly the entire grid in this example before finding the highest point.

White/gray: explored areas (gray indicates lower values, white higher).
Black: unexplored areas.




Weisberg and Muldoon's paper, "Epistemic Landscapes and the Division of Cognitive Labor," published in Philosophy of Science, can be found HERE.


Thursday, November 19, 2015

Constructing 3D Site Models

I am currently going through some of the introductory aspects of site planning. The process has shown how useful it is to create a 3D model in order to gain some insight into the scale of an area and its surroundings.

As an exercise, I am studying the area at 11th and Market, the site of the proposed East Market development currently under construction:








I quickly took the building footprints layer and, through a series of steps, exported the layers from GIS to SketchUp. I then extruded the layers up to the total building heights. SketchUp is pretty amazing, and with a little more time you can really make some great 3D renderings of a concept. This model was a quick, simple example to create a sense of scale for the area. (For City Hall, I grabbed a previously built model with some finer details of the building from SketchUp's 3D Warehouse and scaled it into this model.)





In addition to the 3D model that you can zoom in on and view, I also placed the building footprints into AutoCAD and laid out a template to send to our laser cutter. A number of layers were set up to score the building footprints into the base, and to cut out several copies of the footprints themselves to glue and stack on one another. Pictured below are a few before-and-after photos of the process of creating the model from 1/16" chipboard.






Tuesday, November 3, 2015

Updates Coming Soon....

It's been a while since I posted on here, but I have a bunch of updates coming.

Here are a few topics that I plan to post about:


Mumbai's Eastern Waterfront:

I've been working with a group to develop a possible future plan for the development of Mumbai's eastern waterfront.  Over recent years India's government officials have begun to seriously consider opening up at least 1,000 acres of this underutilized area to future public development.

Our work has included analysis of existing conditions and on-site research, and we are currently working on a future master plan for the development of the area.

Our work is part of a series of studio workshops that will be published online at: http://resilientwaterfronts.org/


Building "From Scratch"

I'll be writing about a few sources and methods for creating data for analysis in areas where GIS and other data might not be readily available. I'll look at sources such as OSM, Google's APIs, digitizing and image classification, and on-site data collection. Additionally, I'll cover some examples of interpreting data, forecasts, and reports into tailored graphs and visualizations.

Site Planning

Some samples from an introduction to site planning, including applications in CAD and SketchUp, and creating a 3D site model.

"What Lies Beneath"

My summer photography project has been finalized and I will be running prints in December.  Photos of the final product and related activities will be posted soon.



Finally, I've found some pretty stark differences between looking something up online and seeing it in person. Here's one illustration of what I found: