Upcoming Posts

Please use the comments section of this page to share comments, suggestions, questions, and requests for new tutorials/posts: anything you wish to know, or material that I can incorporate into upcoming content.

9 thoughts on “Upcoming Posts”

  1. Might I suggest code reusability as a topic? e.g. How to set up geoprocessing tools as independent tools that can be re-used elsewhere, or called up in toolchains (importing, use of modules, packages, etc.)

    Marc

    1. Yes, definitely a great topic for a future post! Tricky to get your head around at first, but simple and incredibly effective once you get used to it.

      Have you done this extensively with Arcpy scripts before? Back when I was using OGR pretty heavily I had a whole suite of tools (sitting as modules in a file) that I would call on continuously, but Arcpy seems a little more polished… The main things I use it for with Arcpy are multiprocessing and setting up default values (for debugging), with a master script normally getting the variables from Arc and then passing them to the module. Do you have any other good examples?
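      As a rough sketch of that pattern (the module name, function, and default paths here are made-up examples, not anything from a real project):

          # master_script.py - run as an ArcGIS script tool, or standalone for debugging
          import arcpy
          import mytools  # hypothetical module holding the reusable functions

          # Read parameters from the tool dialog; fall back to hard-coded
          # defaults when run outside of Arc (handy for debugging).
          in_fc = arcpy.GetParameterAsText(0) or r"C:\data\test.gdb\roads"
          out_fc = arcpy.GetParameterAsText(1) or r"C:\data\test.gdb\roads_clean"

          mytools.clean_roads(in_fc, out_fc)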

      1. Well, I’ve been trying to implement a lot of intermediary tools that I tend to use on a regular basis, though some are a bit particular, like:

        – Read all unique instances of a tuple (Field1, Field2, Field3, …) into a list or dict, for planning out mapping scenarios (see the sketch after this list)
        For example, (‘Building’, ‘Hospital’), (‘Building’, ‘Residential’), (‘Street’, ‘Highway’) are unique values across two fields, which is not something SelectByAttribute can give you with a single click.

        – Conversion of shapes from source to source (the input is a sequence of databases describing the many-to-many relationship between input and output shapes/fields, which is read into a dict and used essentially as a mapping hash. I find myself using this a lot, as I need to harmonize the content of multiple databases from multiple sources describing the same kinds of data)
        For example, here in Canada we have a national topographic standard (Canvec) and provincial standards (OBM for Ontario, BDTQ and BNDT for Quebec, …) that describe the same kind of data. To harmonize all of this, year after year, we’ve got to create correspondence tables detailing what part of what shape in one source corresponds to something in another source. Python turns this from an academic lesson into a really useful exercise. And I’ve used most of the same code for converting a lot of other sources as well.
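        For the first of these, a minimal sketch using arcpy.da.SearchCursor and a set (the feature class and field names are hypothetical):

            import arcpy

            fc = r"C:\data\topo.gdb\features"   # hypothetical feature class
            fields = ["Field1", "Field2"]

            # One cursor pass; the set keeps only unique (Field1, Field2) pairs.
            unique = set()
            with arcpy.da.SearchCursor(fc, fields) as cursor:
                for row in cursor:
                    unique.add(tuple(row))

            for combo in sorted(unique):
                print(combo)   # e.g. ('Building', 'Hospital')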

        But as you said, arcpy is pretty polished altogether. Lower-level, I’ve tried to set up my own functions to facilitate benchmarking and logging (using Python’s built-in modules), writing to and reading from files and, like you, putting error-checking wrappers around built-in arcpy methods (I find Clip and Append the worst from that point of view…)
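        A simple version of such a wrapper, as a sketch (the retry count and delay are arbitrary choices, not anything prescribed by arcpy):

            import time
            import arcpy

            def retry(tool, *args, **kwargs):
                """Call an arcpy tool, retrying a few times before giving up."""
                for attempt in range(3):
                    try:
                        return tool(*args, **kwargs)
                    except arcpy.ExecuteError:
                        print(arcpy.GetMessages(2))  # log the tool's error messages
                        time.sleep(5)                # pause before retrying
                raise RuntimeError("%s failed after 3 attempts" % tool.__name__)

            # e.g. retry(arcpy.Clip_analysis, in_features, clip_features, out_fc)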

  2. Great blog.
    My current problem is getting any Arcpy tools to scale up to large tables.

    A query or process works fine on a small set (less than 1M records), but chokes on larger tables.
    I find that I have to partition the datasets. This can also be very hard because the tables may not have a key on which to create a subset.

    I have tried a number of alternatives, some successful at the expense of complexity.
    Indexing items sometimes helps.
    I am looking at using SQLite (via Python’s sqlite3 module) as an alternative to file geodatabases for intermediate processing.
    Another technique is to load the table into Python structures, say a dictionary, process it, and write it out. This has worked as a substitute for a join of two tables of 2.3M records: it now takes only 18 minutes, where it previously failed or simply hung after several hours.
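    In case it helps anyone, the dictionary-as-join trick looks roughly like this (dataset and field names are made up):

        import arcpy

        join_table = r"C:\data\big.gdb\attributes"   # hypothetical inputs
        target_fc = r"C:\data\big.gdb\parcels"

        # Pass 1: load the join table into a dict keyed on the join field.
        lookup = {}
        with arcpy.da.SearchCursor(join_table, ["KEY_ID", "VALUE"]) as cursor:
            for key, value in cursor:
                lookup[key] = value

        # Pass 2: write matched values straight into the target table,
        # replacing what would otherwise be an AddJoin + CalculateField run.
        with arcpy.da.UpdateCursor(target_fc, ["KEY_ID", "VALUE"]) as cursor:
            for row in cursor:
                if row[0] in lookup:
                    row[1] = lookup[row[0]]
                    cursor.updateRow(row)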

    1. I often do that as well – use dictionaries to store things, then process on the dictionary. It tends to be a lot faster and has got me around some nasty data lock and memory issues when using Multiprocessing… Sometimes, in fact, just the act of adding to a dictionary ‘does’ the desired sorting/calculation, which normally would have been a few Arc operations…
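      For instance, summing an area per land-use class in a single cursor pass (field names hypothetical) can stand in for a Summary Statistics run, with the grouping happening as a side effect of the dictionary keys:

          from collections import defaultdict
          import arcpy

          totals = defaultdict(float)
          with arcpy.da.SearchCursor(r"C:\data\big.gdb\parcels",
                                     ["LANDUSE", "SHAPE@AREA"]) as cursor:
              for landuse, area in cursor:
                  totals[landuse] += area

          for landuse, area in sorted(totals.items()):
              print("{}: {}".format(landuse, area))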

      No experience with SQLite; how does that compare with using a Python dictionary?

      1. This would be a great topic, Stacy.
        I’m also dealing with some large data sets and sometimes the days are just not long enough to wait for a result.

        Maybe I’m just too old-school, but I do miss coverages and the ability to do link relates – so fast.

        So, one vote for a tutorial on features -> dictionaries and back, please.

        Mal

  3. Hello,

    Your website is very useful, but it’s one of many with great top tips and walkthroughs, so keeping track of all these sites can be a bit of a pain. The best thing I find is subscribing to an RSS feed.

    I was wondering if you could add an RSS feed for this site (e.g. the Node Dangles blog has one) so I can get Thunderbird to keep me up to date.

    Just an idea…

    Duncan

    p.s. Yeah I know you have a mailing list but I would rather have an RSS feed!
