At the NYC OGP meeting, I demoed my thesis site. Since there isn’t a video of it, here’s a write-up covering the same material. There are also some slides available.
The goal of my Master's thesis is to make all the world’s spatial data accessible. This goal is accomplished by expanding OpenGeoPortal in two significant ways. First, spatial data files are discovered via web crawls and then ingested. Second, I added the ability to preview and download layers without requiring OGC protocols. This expanded version of OpenGeoPortal is on the web at http://WorldWideGeoWeb.com.
Data on WorldWideGeoWeb.com was discovered by crawling the web, relying exclusively on HTTP GET requests. This is the same technique used by Google and other search engines. The WorldWideGeoWeb crawler can be instructed to crawl a specific site. Sites are searched for links to zip files. The ingest code retrieves and unzips these files. If they contain a shapefile, the bounding box is determined using the shp and prj files. Any metadata file is also parsed. Information about each discovered layer is ingested into WorldWideGeoWeb’s Solr instance.
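To make that flow concrete, here is a rough TypeScript sketch of the ingest logic. It is illustrative only, not the thesis code: the regex-based link extraction, the bounding-box helper, and the Solr core name are all assumptions.

```typescript
// Sketch of the crawl-and-ingest flow, assuming Node 18+ (built-in fetch)
// and the adm-zip package. Names, URLs, and the Solr schema are illustrative.
import AdmZip from "adm-zip";

// The .shp main-file header stores Xmin, Ymin, Xmax, Ymax as little-endian
// doubles at byte offsets 36-67 (per the Shapefile spec). The .prj would be
// parsed to interpret these coordinates; that step is omitted here.
function extractBoundingBox(shp: Buffer): number[] {
  return [36, 44, 52, 60].map((off) => shp.readDoubleLE(off));
}

async function crawlSite(siteUrl: string): Promise<void> {
  const html = await (await fetch(siteUrl)).text();

  // Naive link extraction: find hrefs ending in .zip and resolve them
  // against the page URL. A real crawler would use a proper HTML parser.
  const zipUrls = [...html.matchAll(/href="([^"]+\.zip)"/gi)].map((m) =>
    new URL(m[1], siteUrl).toString()
  );

  for (const zipUrl of zipUrls) {
    // Retrieve and unzip the file; skip it unless it contains a shapefile.
    const zip = new AdmZip(Buffer.from(await (await fetch(zipUrl)).arrayBuffer()));
    const shp = zip
      .getEntries()
      .find((e) => e.entryName.toLowerCase().endsWith(".shp"));
    if (!shp) continue;

    // Ingest a record for the discovered layer into Solr (core name assumed).
    await fetch("http://localhost:8983/solr/wwgw/update?commit=true", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify([
        { id: zipUrl, name: shp.entryName, bbox: extractBoundingBox(shp.getData()) },
      ]),
    });
  }
}
```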
After ingest, OpenGeoPortal’s powerful search interface allows users to quickly and easily find spatial data layers. Previewing shapefiles on the map is based on parsing and rendering shapefiles entirely in JavaScript; it does not use image tiles from GeoServer or ArcGIS Server. When the user selects a layer to preview, the browser sends a request to the server to create a temporary, server-side copy of the zip file. The URL of the zip file, stored in the Solr record during ingest, is used to issue an HTTP GET request that creates a local copy of the zip file, which is then unzipped. At this point the browser requests the .shp, .shx, .prj, and .dbf components of the shapefile and processes them in JavaScript as binary data streams. If the data is not in a suitable projection, it is reprojected in the browser. The features in the shapefile are then parsed and rendered on OpenGeoPortal’s map, and attributes from the .dbf file are displayed as features are moused over.
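To give a flavor of what parsing shapefiles entirely in the browser involves, here is a stripped-down TypeScript sketch. It is not the thesis code: it assumes the server exposes the cached parts under a base URL, and it handles only point geometry (shape type 1) to stay short.

```typescript
// Fetch the unzipped shapefile parts the server cached, then walk the .shp
// as a binary stream with DataView. Point geometry only; URLs are assumed.
async function previewLayer(baseUrl: string): Promise<{ x: number; y: number }[]> {
  const [shp, _shx, _prj, _dbf] = await Promise.all(
    [".shp", ".shx", ".prj", ".dbf"].map(async (ext) =>
      new DataView(await (await fetch(baseUrl + ext)).arrayBuffer())
    )
  );
  // In the real code, _prj drives in-browser reprojection and _dbf supplies
  // the mouse-over attributes; both are omitted from this sketch.

  // The 100-byte main header stores the shape type as a little-endian int32
  // at byte offset 32 (Shapefile spec).
  if (shp.getInt32(32, true) !== 1) throw new Error("sketch handles point layers only");

  // Each record: an 8-byte big-endian header (record number, content length
  // in 16-bit words), then the content (shape type, then X and Y as
  // little-endian doubles). A point record is 28 bytes in total.
  const points: { x: number; y: number }[] = [];
  let offset = 100;
  while (offset + 28 <= shp.byteLength) {
    const contentWords = shp.getInt32(offset + 4, false);
    points.push({
      x: shp.getFloat64(offset + 12, true),
      y: shp.getFloat64(offset + 20, true),
    });
    offset += 8 + contentWords * 2;
  }
  return points; // the real code hands these to the map for rendering
}
```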
The following screenshot shows WorldWideGeoWeb.com. The search results were discovered by crawling Westchester County’s data web site at http://giswww.westchestergov.com/wcgis/DataWarehouse.htm. The map shows a preview of the layer titled “County Legislative Districts”. The browser debug panel at the bottom of the screenshot shows the network traffic generated by the preview request. The “cacheShapeFile.jsp” Ajax call told the server to copy the shapefile from http://giswww.westchestergov.com using an HTTP GET and unzip the result. After the Ajax request completes, the .shx, .shp, .dbf, and .prj files are requested by the browser and parsed in JavaScript. Transferring this 220-kilobyte layer first to the WorldWideGeoWeb server and then to the browser took just under 2 seconds.
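The server side of that Ajax call is a JSP in the actual implementation; sketched in TypeScript, the equivalent logic would look roughly like this (paths and names assumed):

```typescript
// Sketch of what cacheShapeFile.jsp does: fetch the original zip with an
// HTTP GET, unzip it into a directory the web server exposes, and let the
// browser request the parts from there. All names and paths are assumed.
import AdmZip from "adm-zip";
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

async function cacheShapeFile(zipUrl: string): Promise<string> {
  const buf = Buffer.from(await (await fetch(zipUrl)).arrayBuffer());
  const dir = mkdtempSync(join(tmpdir(), "wwgw-")); // temporary cache dir
  new AdmZip(buf).extractAllTo(dir, /* overwrite */ true);
  return dir; // the browser then fetches <dir>/<layer>.shp, .shx, .dbf, .prj
}
```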
The user can add any of these Westchester layers to the cart and download them. The zip files are transferred directly from the Westchester server to the browser. Neither clipping the data nor converting it to another format is supported.
WorldWideGeoWeb shows it is possible to build a powerful, interactive portal without requiring data holders to create web services. Data available only on web sites designed for people can be ingested using a web crawl and previewed using advanced JavaScript techniques that weren’t available when the OGC protocols were created. And since WorldWideGeoWeb is built on OpenGeoPortal, data that is available via web services can also be supported.
Limitations
There are significant limitations in the current version of the software. Most notable is its inability to deal with large shapefiles: currently, shapefiles over one megabyte can cause the browser to hang. Search results are color-coded to advise the user. Green layers are small and should preview quickly. Yellow layers are larger but should preview without too much delay. Red layers represent shapefiles over a megabyte and should not be previewed. Even these large layers can still be downloaded easily, just not previewed.
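The color-coding itself is just a size threshold. As a sketch (the one-megabyte red cutoff is from above; the green/yellow boundary shown is an assumption):

```typescript
// Map a layer's zip size to its preview-advice color. The 1 MB red cutoff
// comes from the write-up; the green/yellow boundary here is an assumption.
type PreviewColor = "green" | "yellow" | "red";

function previewColor(zipBytes: number): PreviewColor {
  if (zipBytes > 1_000_000) return "red"; // too large: download only
  if (zipBytes > 250_000) return "yellow"; // assumed boundary: some delay
  return "green"; // small: previews quickly
}
```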
My thesis code is not production quality.
Future Directions
During a crawl, only spatial resources in shapefiles are discovered, and their associated metadata must be in FGDC or ISO 19115 format. It would be trivial to add support for KML and KMZ files. Support for other file formats and metadata standards could also be integrated.
Crawling based on OGC protocols such as GetCapabilities and CSW could be added.
The ranking of the search results could be based on the PageRank of the page that linked to the zip file.
Semi-spatial data such as web pages about places could be ingested and searched spatially.
Other Notes
I now work for Voyager Search (http://voyagersearch.com/). We are investigating how some of these ideas could be incorporated into our existing products. The code created for my thesis has been released under the GPL.
Some of the data an organization provides may be more critical or more widely used than the rest. That data could be made available via an OGC-compliant server, while other, less critical data is made available only via HTTP GET.