Extracting color values from layer to build index using ArcPy/ArcMap?

My university wants the maps we create to become more standardized and so we're changing all the colors on our base vector map.

The new standard needs to be indexed and I have the task of creating the spreadsheet/document that will help us in the future.

There are close to 100 layers that I need to index with, for example, the polygon layer name, fill color (rgb values), border color, and border width.

I have only recently started learning ArcPy, so I was hoping there would be a GUI method I could follow and, if not, advice on what I should be looking for in an ArcPy script.

How does one extract this information?


There is a way, but not an easy one, since it involves creating a service definition and reading XML files.

1) Using the arcpy functions CreateMapSDDraft and StageService_server, you can create a .sd file.

2) Unzip the "service name".sd file.

3) Unzip the …\v101\"service name".msd file.

4) The file …\layers\"layername".xml holds the color codes (RGB, CMYK, etc.), as seen below.

So you can automate the whole process using Python.
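For reference, here is a minimal Python sketch of that workflow. The paths, service name, and XML tag filter are placeholders, and the exact element names should be confirmed against one of the extracted XML files; extracting the .sd/.msd archives may also require an external tool such as 7-Zip.

```python
import os
import xml.etree.ElementTree as ET

import arcpy

mxd_path = r"C:\maps\basemap.mxd"          # hypothetical map document
out_dir = r"C:\maps\sd_extract"            # hypothetical output folder
sddraft = os.path.join(out_dir, "basemap.sddraft")
sd_file = os.path.join(out_dir, "basemap.sd")

# Step 1: create a draft service definition, then stage it into a .sd file.
mxd = arcpy.mapping.MapDocument(mxd_path)
arcpy.mapping.CreateMapSDDraft(mxd, sddraft, "basemap", "ARCGIS_SERVER")
arcpy.StageService_server(sddraft, sd_file)

# Steps 2-3: extract the .sd archive and the .msd inside it here
# (e.g., by shelling out to 7-Zip) before reading the layer XML files.

# Step 4: scan a per-layer XML file for color elements. The "Color"
# substring filter is illustrative only.
layer_xml = os.path.join(out_dir, "extracted", "layers", "roads.xml")
for elem in ET.parse(layer_xml).iter():
    if "Color" in elem.tag:
        print(elem.tag, elem.attrib, elem.text)
```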


I agree with comments by @ianbroad:

This sounds like it might be a manual process. I don't believe arcpy has a way to extract the fill color, border color, or border width. Most likely you could get this information with ArcObjects, but someone else would have to confirm that.

and @Fezter:

Would it be feasible for you to save lyr files which store all those things? You could open all your layers into an MXD with all the symbology set. Then, iterate through them and use arcpy.SaveToLayerFile_management

Personally, I would follow @Fezter's suggested method, which I have seen used very effectively at some sites I consult to.
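For completeness, a minimal sketch of that approach (the MXD path and output folder are placeholders):

```python
import os

import arcpy

mxd_path = r"C:\maps\basemap.mxd"    # hypothetical map document
lyr_dir = r"C:\maps\lyr_files"       # hypothetical output folder

mxd = arcpy.mapping.MapDocument(mxd_path)
for lyr in arcpy.mapping.ListLayers(mxd):
    if not lyr.isGroupLayer:
        # Each .lyr file stores the layer's symbology (fill color,
        # border color, border width) for the index.
        arcpy.SaveToLayerFile_management(
            lyr, os.path.join(lyr_dir, lyr.name + ".lyr"))
del mxd
```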


All methods

The Value Method field in the DynamicValue table defines the actions that occur when the Attribute Assistant is enabled and features are modified or created in ArcMap. Four fields in the DynamicValue table (Value Method, Table Name, Field Name, and Value Info) must be configured to use an Attribute Assistant method. The remaining fields define when the Attribute Assistant method should be applied.

If the method you are using creates a new record, that record is not available until all rules have processed for the feature that triggered the rule. The following methods generate new records:

  • Copy Features
  • Create Linked Record
  • Create Perpendicular Line
  • Create Perpendicular Line to Line
  • Split Intersecting Feature

The following 71 Attribute Assistant methods can be configured in your DynamicValue table:

Method Description
Angle Calculates the geographic or arithmetic angle of a line feature.
Autonumber Finds the largest value in a field and calculates the next sequential value.
Cascade Attributes Updates all occurrences of a value when the corresponding value in another table is changed.
Copy Features Copies a feature when an attribute is updated to a specified value.
Copy Linked Record Updates an attribute of a feature with a value from a related table.
Create Linked Record Creates a new record in a feature layer with a relationship to a table using a primary/foreign key relationship.
Create Perpendicular Line Constructs a perpendicular line from the input point and an intersecting line. The line's length is specified by the Length parameter.
Create Perpendicular Line to Line Constructs a perpendicular line from the input point to the nearest line.
Current Username Populates the current user name.
Edge Statistics Provides statistics on a specified field for all connected edges in a geometric network.
Expression Executes a VBScript evaluated by the MSScriptControl. Can be used to access built-in functions and conditional logic (if statements).
Feature Statistics Summarizes the attribute values of the affected feature as a series of statistics or a single calculated value.
Field Copies the value from one field to another within the same feature class.
Field Trigger Updates a field to a specified value when the value of another field is updated.
From Edge Field Copies a field value from a connected From Edge feature to a connected junction feature.
From Edge Multiple Field Intersection Copies values for all From Edges connected to a junction to a series of fields in the source layer.
From Edge Statistics Calculates statistics on a specified field for all features connected to From Edges in a geometric network.
From Junction Field Copies a field value from a connected From Junction feature to a connected edge feature. Can also copy the name of the feature class at the start of the currently edited line.
Generate ID Increments a row in an unversioned table and stores that newly incremented value.
Generate ID By Intersect Generates unique identifiers for features based on the identifiers of intersecting grid features.
Get Address From Centerline Extracts address information from the closest point on a road. It is similar to a reverse geocode, but a locator service is not used.
Get Address Using ArcGIS Service Performs a reverse geocode using a specified ArcGIS service.
Get Address Using Geocoder Performs a reverse geocode using a geocoder.
GUID Creates a globally unique identifier (GUID).
Intersecting Boolean Stores a value if the triggering feature intersects a feature in the specified layer.
Intersecting Count Calculates the number of intersecting features and stores the count in the specified field.
Intersecting Edge Copies a field value from the first intersecting edge feature.
Intersecting Feature Copies a value from an intersecting feature in the specified layer.
Intersecting Feature Distance Calculates the distance along a line feature where a line is intersected by another feature.
Intersecting Layer Details Extracts the name or file path of an intersecting layer.
Intersecting Raster Extracts a raster cell value at a feature location. If the feature is a line or polygon, the raster value at the feature centroid is used.
Intersecting Statistics Calculates statistics on a specified field for intersecting features.
Junction Rotation Stores the rotation angle of a junction feature based on connected edge features.
Last Value Repeats the last value used in a field.
Latitude Stores the y-coordinate value projected to WGS84 decimal degrees.
Length Calculates the length of line features and the area of polygon features.
Link Table Asset Updates a field in the table or layer with a value from a selected feature.
Longitude Stores the x-coordinate value projected to WGS84 decimal degrees.
Map Info Stores information from the current map document metadata or the version info of the layer being edited.
Minimum Length Rejects a newly created line feature if the length of the line is less than the specified distance.
Multiple Field Intersecting Values Copies values from new intersecting features into a target layer.
Nearest Feature Copies a value from the nearest feature in a specified layer.
Nearest Feature Attributes Copies a series of values from the nearest feature in a specified layer.
Offset Populates the location of a point a specified distance from the nearest line feature.
Previous Value Monitors a field, and when it is changed, stores the previous value in another field.
Prompt Identifies records containing null values. If the field uses a subtype or domain, those options are presented in the dialog box for the user to select.
Set Measures Populates the m-coordinates of line features. M-values can be used to add route events to point and line events dynamically along line features.
Side Determines if a point feature is to the left or right of a corresponding line feature.
Split Intersecting Feature Splits features that intersect with features in a source layer.
Timestamp Populates the current date and time.
To Edge Field Copies a field value from a connected To Edge feature to a connected junction feature.
To Edge Multiple Field Intersect Copies values for all To Edges connected to a junction to a series of fields in the source layer.
To Edge Statistics Calculates statistics on a specified field for all features connected to To Edges in a geometric network.
To Junction Field Copies a value from a connected To Junction feature to a connected edge feature. Can also copy the name of the feature class at the end of the currently edited line.
Trigger Attribute Assistant Event From Edge Triggers the Attribute Assistant for the From Edge feature.
Trigger Attribute Assistant Event From Junction Triggers the Attribute Assistant for the From Junction feature.
Trigger Attribute Assistant Event Intersecting Feature Triggers the Attribute Assistant for the intersecting features.
Trigger Attribute Assistant Event To Edge Triggers the Attribute Assistant for the To Edge feature.
Trigger Attribute Assistant Event To Junction Triggers the Attribute Assistant for the To Junction feature.
Update From Edge Field Copies a field value from a junction to a connected From Edge feature.
Update From Junction Field Copies a field value from a connected edge to a connected From Junction feature.
Update Intersecting Feature Updates a field in an intersecting feature with a value or a field value from the modified or created feature.
Update Linked Record Finds the related records in another table or layer and updates a field in those records.
Update To Edge Field Copies a field value from a junction to a connected To Edge feature.
Update To Junction Field Copies a field value from a connected edge to a connected To Junction feature.
Validate Attribute Lookup Verifies field values against entries in a lookup table.
Validate Attributes Compares values in the input fields to all feature templates for the feature class.
Validate Connectivity Validates the number of connections on a feature and rejects the edits if the criteria are not met.
Validate Domain Validates data entry on fields with domains against the domain. If the value is outside the range or not in the coded value list, the edit is aborted.
X Coordinate Calculates the x-coordinate of a feature in database units.
Y Coordinate Calculates the y-coordinate of a feature in database units.

Angle

Calculates the geographic or arithmetic angle of a line feature.

To configure this method, populate the following in the DynamicValue table:

Table Name: Feature class name
Field Name: Field used to store the calculated angle
Value Method: ANGLE
Value Info: (blank)

This rule can only be configured on linear features, and the angle value must be populated in Float or Double fields.
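As a sketch, a rule like this can be added to the DynamicValue table with an insert cursor. The field names below (TABLENAME, FIELDNAME, VALUEMETHOD, VALUEINFO) and the paths are assumptions; verify them against your own DynamicValue table.

```python
import arcpy

dynamic_value = r"C:\data\edits.gdb\DynamicValue"  # hypothetical table path
fields = ["TABLENAME", "FIELDNAME", "VALUEMETHOD", "VALUEINFO"]

with arcpy.da.InsertCursor(dynamic_value, fields) as cursor:
    # Store the calculated angle of RoadCenterline features in ANGLE_VAL
    # (both names are hypothetical); no Value Info setting is used here.
    cursor.insertRow(("RoadCenterline", "ANGLE_VAL", "ANGLE", None))
```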


Introduction

Anomaly detection for analysing spatio-temporal data is a rapidly growing problem in the wake of an ever-increasing number of advanced sensors that continuously generate large-scale datasets. For example, vehicle GPS tracking, social media, financial network and router logs, and high-resolution surveillance cameras all generate huge amounts of spatio-temporal data. This technology is also important in the context of cyber security, since cyber data carries an IP address, which can map to a specific geolocation, and a timestamp. Yet current cyber-security approaches are not able to process this kind of information effectively. To illustrate this deficiency, consider the scenario of a distributed denial-of-service (DDoS) attack in which the network packets may come from different IP addresses with sparse locations. In such a case, a spatio-temporal analysis system [1] is required to analyse the spatial pattern of the DDoS attack. Yet user-oriented analytic environments for cyber security with spatio-temporal marks are currently limited to traditional statistical methods such as spatio-temporal outlier detection and hotspot detection [2]. Furthermore, much of the current work in large-scale analytics focuses on automating analysis tasks, such as detecting suspicious activity over a wide area and time interval. These approaches do not give analysts of cyber-security data with spatio-temporal marks the flexibility to employ creativity and discover new trends in the data while operating over extremely large datasets. Current solutions are also prohibitive because they require a multidisciplinary skill set.

One possible solution for performing analytics on such large-scale spatio-temporal data is to retrieve the metadata of spatial point patterns [5] and apply metadata processing and storage approaches [6], together with domain knowledge derived by machine learning and statistical means. An added advantage of this method is that the metadata hides the details of the point patterns, thus providing privacy while still supporting a variety of analytics.

We, thus, propose a framework for performing analytics with spatio-temporal data that has the following properties:

Privacy protection: We use a meta-analysis of tracking data as an indicator of subjects' behavior; the geolocation of a subject is never exposed to the system user.

High scalability: We are able to retrieve behavior patterns for different amounts of data, since the Morisita index adapts to different amounts of tracking data (a minimal computation is sketched after this list).

Convenience: We designed a convenient way to map anomalous cyber events to physical threats, since cyber threats can be visualized on a real map.
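For illustration, a minimal sketch of the Morisita index of dispersion over quadrat counts, assuming points binned on a regular grid (the data and grid size are placeholders):

```python
import numpy as np

def morisita_index(x, y, n_bins=10):
    """I_d = Q * sum(n_i * (n_i - 1)) / (N * (N - 1)), where n_i are the
    counts in each of Q quadrats and N is the total number of points."""
    counts, _, _ = np.histogram2d(x, y, bins=n_bins)
    n = counts.ravel()
    N = n.sum()
    return n.size * np.sum(n * (n - 1)) / (N * (N - 1))

# Clustered points yield I_d > 1; a uniform pattern approaches 1.
rng = np.random.default_rng(0)
x, y = rng.normal(0, 1, 1000), rng.normal(0, 1, 1000)
print(morisita_index(x, y))
```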

In more detail, we propose a framework to store and process large-scale spatio-temporal data over a “metadata based point pattern” infrastructure, while providing users with a metadata analysis that hides the details of large-scale spatio-temporal data and provides them with a front-end interface that allows them to run a variety of security checks including outlier detection for a single subject, anomaly group detection, anomaly behavior detection and anomaly event detection. Furthermore, the spatio-temporal data is stored in various data stores. As a result, this framework provides high-performance analytical features, flexibility, and extensibility.

The theoretical contribution and novelty of our work lies in the combination of methods from the areas of spatio-temporal analysis, machine learning and statistical analysis. By extracting relevant methods from these three fields of research, we created an effective and efficient tool for anomaly detection by monitoring the cyber and physical levels, simultaneously.


GEOG 390: Unmanned Aerial Systems


The goal of this assignment was to process multi-spectral imagery in Pix4D and use that imagery to complete a value added data analysis of vegetation health for a site in Fall Creek, Wisconsin.

The camera used to capture the imagery for this assignment was a MicaSense RedEdge 3. This camera captures five photographs in five different spectral bands, which allows for more precision in agriculture and vegetation analysis than a standard RGB sensor. The five bands, in order from shortest wavelength to longest, are as follows: band 1 is the blue filter, band 2 is the green filter, band 3 is the red filter, band 4 is the red edge filter, and band 5 is the near infrared (NIR) filter. The RedEdge camera also requires specific parameters for proper image capture and analysis. The following table (table 1) lists those parameters from the RedEdge user manual.

Table 1: MicaSense RedEdge 3 sensor parameters

Methods

The first step for this assignment was to process the flight imagery taken from the site in Pix4D. This was done using the same methods as previous assignments; however, this time the Ag Multispectral template was used. This creates five orthomosaic GeoTIFFs, one for each of the spectral bands. In figure 1, the template is shown set to Ag Multispectral. This didn't automatically produce the orthomosaics needed for further analysis, so the orthomosaic GeoTIFF and subsequent options were checked.

Figure 1: Pix 4D processing options

Once processing concluded, the next step was to composite all five spectral bands into one RGB orthomosaic. To do this, the GeoTIFFs for each band were brought into ArcMap and the "Composite Bands" tool was used. This tool takes each of the five spectral bands as input rasters; the user then simply assigns a name and location to the output raster and the composite is created.
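This step can also be scripted; here is a minimal sketch with ArcPy, assuming the five single-band GeoTIFFs exported by Pix4D (file names are placeholders):

```python
import arcpy

# Hypothetical Pix4D band outputs, ordered blue through NIR.
bands = [r"C:\uas\band1_blue.tif",
         r"C:\uas\band2_green.tif",
         r"C:\uas\band3_red.tif",
         r"C:\uas\band4_rededge.tif",
         r"C:\uas\band5_nir.tif"]

# Stack the five single-band rasters into one multiband composite.
arcpy.CompositeBands_management(bands, r"C:\uas\composite.tif")
```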

Figure 2: Composite Bands tool in ArcMap

Figure 3: Adjustments to the symbology in the layer properties for the RGB composite
Figure 4: Various composite layers
Three maps were then produced in ArcMap with the different multispectral layers (see results section). From there, the next step was to perform a value added data analysis in ArcGIS Pro. This analysis shows permeable and impermeable surfaces for the given site. To do this, steps for value added data analysis from assignment 4 were used in conjunction with the data from this assignment.

The first step was to segment the imagery (figure 5). This makes the spectral band values less complex and helps the user break up the imagery into permeable and impermeable surfaces. Segmenting the imagery in ArcGIS Pro was done by bringing in the composite raster and following the prompts from the tool.
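In ArcGIS Pro the segmentation can likewise be scripted; a minimal sketch assuming the Segment Mean Shift tool, a Spatial Analyst license, and placeholder parameter values:

```python
import arcpy
from arcpy.sa import SegmentMeanShift

arcpy.CheckOutExtension("Spatial")

# Spectral detail, spatial detail, and minimum segment size are
# placeholder values; tune them to the imagery.
segmented = SegmentMeanShift(r"C:\uas\composite.tif", 15.5, 15, 20)
segmented.save(r"C:\uas\segmented.tif")
```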


Figure 5: Segmented Imagery

Figure 6: Surface classification

Figure 7: Training sample manager with 5 custom classes

Figure 8: Result of classification
After the imagery was classified, the final step was reclassification, in which the classified imagery was categorized into pervious and impervious (i.e., permeable and impermeable) surfaces. This was done by entering values of 0 for impervious surfaces and 1 for pervious surfaces (figure 9 and table 2).
Figure 9: Reclassify tool

Table 2: Pervious and impervious reclassification
This resulted in a value added rendering of the original data, which was used to make a map showing the pervious and impervious surfaces of the site.
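Scripted, the reclassification might look like the following sketch, assuming a Spatial Analyst license; the five input class values and their pervious/impervious assignments are hypothetical:

```python
import arcpy
from arcpy.sa import Reclassify, RemapValue

arcpy.CheckOutExtension("Spatial")

# Map each classified value to 0 (impervious) or 1 (pervious);
# the class numbering is illustrative.
remap = RemapValue([[1, 1],   # grass      -> pervious
                    [2, 1],   # trees      -> pervious
                    [3, 0],   # rooftops   -> impervious
                    [4, 0],   # pavement   -> impervious
                    [5, 1]])  # bare soil  -> pervious

reclassified = Reclassify(r"C:\uas\classified.tif", "Value", remap)
reclassified.save(r"C:\uas\perv_imperv.tif")
```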

Lastly, a normalized difference vegetation index (NDVI) map was created. This map shows the health of vegetation and is analyzed in a similar way to the false color maps made for this assignment. Since the Ag Multispectral template was used when processing the imagery in Pix4D, an NDVI raster was produced. The map was made by layering the NDVI over the DSM, also produced in Pix4D, and using a hillshade effect.
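For reference, NDVI can also be computed directly from the red and NIR bands if the Pix4D index raster is not at hand; a minimal sketch with placeholder paths:

```python
import arcpy
from arcpy.sa import Float, Raster

arcpy.CheckOutExtension("Spatial")

red = Float(Raster(r"C:\uas\band3_red.tif"))
nir = Float(Raster(r"C:\uas\band5_nir.tif"))

# NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1;
# healthy vegetation pushes values toward 1.
ndvi = (nir - red) / (nir + red)
ndvi.save(r"C:\uas\ndvi.tif")
```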

Results

Map 1: RGB Orthomosaic

Map 1 shows a conventional red, green, and blue band orthomosaic image. Due to the combination of the individual color band images and the somewhat poor images to work with, some areas on the map appear to contain more red hues than in real life. Still, the viewer can make out what the objects in the image represent and can interpret the "pinkish" grass-covered area in the southern portion of the image as an area of poor vegetation health. If these areas contained healthy vegetation, they would most likely be green, much like the crop field in the western portion of the map. Since this image is in a standard RGB display, the poor vegetation areas and the unusual pink hue shift may be attributable to the quality of the images taken in this flight. Perhaps a different color band rendering will help to resolve the uncertainties in map 1.

Comparing map 3 to map 2, there isn't much difference between the two. Both are false color renderings, except one uses the RedEdge band and the other uses the near-IR band. Using the near-IR band saturates the areas of healthier vegetation even further. This helps distinguish large areas of healthy vegetation from large areas of unhealthy vegetation; however, some of the finer detail in vegetation health variance is lost to this saturation. The hedge between the two properties and the unkempt area to the right of the homeowner's lawn are good examples of this loss.

Map 4: Value Added Pervious and Impervious Surfaces

In map 4, areas in blue show pervious (or permeable) surfaces and areas of khaki show impervious (or impermeable) surfaces. Some of the impervious areas near the top right of the map are in fact pervious, but ArcGIS Pro interpreted them as impervious. This could be due to the quality of the imagery or user error when classifying the segmented imagery. Also, the entire border surrounding the image was classified as an impervious surface; however, this area shouldn't have been included.
Map 5: NDVI raster
The fifth and final map of this analysis is map 5, the NDVI raster. Because the Ag Multispectral template was used in the image processing with Pix4D, an NDVI raster was produced. With the gradient used for this map, features of the landscape become enriched: areas of poor vegetation health are shown in a rusty red while areas of good vegetation health are shown in indigo. Over larger portions of vegetation with similar health, such as the southern area of grass, the user is able to see variances in vegetation health in great detail. There also seem to be minimal false interpretations of values in this map, the only real exception being the shadow cast by the house.

Conclusions

It is clear that using the Ag Multispectral template in Pix4D together with the MicaSense RedEdge 3 sensor is a fantastic option for farmers, biologists, golf course management, and other similar applications. This technology allows users to gain an in-depth analysis of the health of their vegetation. Setbacks to this technology include the large potential for false information from user error. During the flight that collected the images used for this analysis, the pilot accidentally had the camera on while the UAV was climbing to its planned flight altitude. Mishaps such as this have an effect on the quality and accuracy of the analysis. If I could do this assignment over again, a potential fix for this mistake would be to eliminate those images from processing altogether. If images from a UAV are taken with great care and accuracy and the user completes all the data manipulation steps correctly, this technology has the potential to provide cutting edge agricultural and other vegetation-based analysis.


Technical Validation

We found that 62,758 (93.5%) of 67,141 available plots were used at least once for imputation. In assessing the accuracy of the imputed dataset, there are several pertinent questions. The ultimate measure of agreement is how well the imputed dataset replicates conditions on the ground, which can be assessed at selected locations by comparing the attributes of the output grid of plot IDs to a set of recently measured FIA plots. Since the target LANDFIRE data were based on satellite imagery for the year 2014, the imputed dataset also has a vintage of 2014, and we used a subset of the FIA plots measured in 2014 to assess its accuracy (using FIA plots from previous years would not account for growth or disturbances between the time a plot was measured and 2014). We obtained the locations of 2,319 multi-condition FIA plots measured in 2014 and leveraged these in the validation of the imputed dataset. The accuracy of the imputed dataset depends heavily on the predictor variables drawn from the LANDFIRE target data; hence, a second issue is how well the target LANDFIRE data itself compares to conditions on the ground, which can be assessed using the same set of FIA plots from 2014. Errors and inaccuracies in the LANDFIRE target dataset will naturally propagate to the imputed dataset. Logically, then, it also makes sense to compare the imputed dataset to the LANDFIRE target data on a pixel-by-pixel basis for all 2,841,601,981 pixels: if the methodology is performing well, then the values of forest cover (EVC), height (EVH), and vegetation group (EVG) derived from the imputed forest plots will correspond well with the values in the target LANDFIRE data. Agreement was measured in terms of the producer's accuracy (the probability that a category on the ground received that classification in the imputed map) and the user's accuracy (the probability that the imputed class in fact represents that class on the ground). In summary, validation was conducted to quantify the agreement between: 1) the plot conditions measured on the ground by FIA at the locations of the 2,319 multi-condition plots and the imputed dataset at those same locations, 2) the plot conditions at the locations of the 2,319 FIA plots and the LANDFIRE target data at those same locations, and 3) the LANDFIRE gridded target data and the imputed gridded data. Because we added disturbance code as a new predictor variable, we also assessed its accuracy by comparing the imputed grid and the LANDFIRE target data.
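As an illustration of these two measures, a minimal sketch computing producer's and user's accuracy from a confusion matrix (the matrix below is hypothetical, not the paper's data):

```python
import numpy as np

def accuracies(conf):
    """conf[i, j] = pixels of ground-truth class i mapped to class j."""
    producers = np.diag(conf) / conf.sum(axis=1)  # per ground-truth class
    users = np.diag(conf) / conf.sum(axis=0)      # per mapped class
    return producers, users

conf = np.array([[90, 10],
                 [20, 80]])
print(accuracies(conf))
```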

Agreement between the FIA reference data and the imputed dataset

We checked for matches in three different attributes to quantify agreement between the gridded imputed dataset and the FIA reference data at the locations of the 2,319 multi-condition FIA plots measured in 2014: 1) the forest cover (EVC), 2) the forest height (EVH), and 3) the two tree species with the highest basal area. We chose the last measure in lieu of the EVG, as tree species are a direct and measurable characteristic of a forest plot and are not subject to any uncertainty in the vegetation group (EVG) categorization. The splayed footprint of a single FIA plot (Fig. 2) is 40.25 m in radius and spans several 30 × 30 m pixels [4]. The combined area of the four subplots is 672 square meters, relatively close to the size of a single pixel of 30 × 30 m imagery (900 square meters). We checked all pixels whose centroid fell within the FIA plot radius for matches in these attributes (EVC, EVH, and tree species), since one or more of the four subplots may have fallen on these pixels. For each pixel within an FIA plot's radius, we used the plot identifier number of the imputed plot to look up the corresponding values for EVC, EVH, and tree species. We recorded whether the EVC and EVH values of at least one pixel within a plot's radius matched the value calculated for the plot. As another measure of imputation accuracy, we calculated whether the weighted cover value of pixels within the plot's radius was within 10% of the plot value, and whether the weighted height value was within 5 m of the plot value. To evaluate whether the species composition was similar in the FIA and imputed data, we calculated the basal area of each live tree from its diameter (DIA field in the TREE table of the FIADB), multiplied it by the number of trees per acre (TPA_UNADJ field in the TREE table), and summed basal area for each species on a plot using the species code (SPCD field in the TREE table). We then identified the species with the top two basal areas for each plot and checked whether any pixels within the plot footprint had at least one of the same top two species.
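A minimal sketch of that basal-area ranking, assuming a CSV extract of the FIADB TREE table and the standard forestry constant of 0.005454 for basal area in square feet from diameter in inches (verify field names against your extract):

```python
import pandas as pd

tree = pd.read_csv("TREE.csv")              # hypothetical FIADB extract
tree = tree[tree["STATUSCD"] == 1]          # keep live trees only

# Per-acre basal area contribution of each tree record.
tree["BA_PER_ACRE"] = 0.005454 * tree["DIA"] ** 2 * tree["TPA_UNADJ"]

# Sum basal area by plot and species, then keep the top two species per plot.
ba = tree.groupby(["PLT_CN", "SPCD"])["BA_PER_ACRE"].sum().reset_index()
top2 = (ba.sort_values("BA_PER_ACRE", ascending=False)
          .groupby("PLT_CN")
          .head(2))
print(top2.head())
```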

Of the 2,319 multi-condition plots obtained for the validation, 2,858 had at least one forested pixel within their plot radius (98.1%). Several possible reasons exist for this discrepancy: because the mask of forested pixels was derived from the LANDFIRE target data, there may be instances where a pixel is forested but LANDFIRE did not classify it as such, or FIA may have measured trees on the plot while total cover fell below the 10% threshold. The cover bin of at least one pixel within a plot's radius matched the plot value in 44.0% of cases, and the weighted cover value was within 10% of the plot value in 48.7% of cases (Table 2). The height bin of at least one pixel within a plot's radius matched the plot value in 85.7% of cases, and the weighted height value of pixels within the plot's radius was within 5 m of the plot value in 70.3% of cases. At least one of the two species with the highest basal area on the plot was also one of the top two species on at least one pixel within the plot's radius in the imputed dataset in 76.7% of cases. While tree species was neither a predictor nor a response variable, the imputed dataset predicts tree species with a fairly high level of skill.

Agreement between the FIA reference data and the target LANDFIRE dataset

We repeated the analysis described in the section above using the target LANDFIRE dataset instead of the imputed dataset. The rates at which matches occurred were similar for the EVC and EVH variables whether comparing the plot values to the LANDFIRE grids or to the imputed grid (Table 2). Specifically, the cover value of at least one pixel within a plot's radius matched in 43.7% of cases (versus 44.0% in the imputed data), while the weighted cover value of pixels within the plot's radius was within 10% of the plot value in 48.7% of cases (versus 48.7% in the imputed data; Table 2). The height value of at least one pixel within the plot's radius matched in 85.4% of cases (versus 85.7% in the imputed data), while the weighted height value of pixels within the plot's radius was within 5 m in 70.2% of cases (versus 70.3% in the imputed data). It would appear that the accuracy of the imputed dataset in the cover and height categories is heavily driven by the target data, a question investigated more thoroughly in the next section.

Agreement between the target LANDFIRE and imputed datasets

Here, we compared the gridded input LANDFIRE data and the gridded output imputed dataset on a pixel-to-pixel basis, because both datasets are 30 × 30 m grids. Since the target LANDFIRE data were used to generate the suite of predictor variables used to choose the best-matching plot for each pixel, if the random forests imputation performed well, the values of these variables should be similar in the imputed data.

From the raster of imputed plot IDs, we generated rasters based on the plot characteristics: one raster for cover (EVC), one for height (EVH), and one for vegetation group (EVG). Each of these rasters was then combined with the LANDFIRE target raster and the values compared on a pixel-by-pixel basis as a measure of imputation accuracy.

The forested mask of the LANDFIRE data included 2,841,601,981 pixels to which we imputed FIA plots. The imputed raster had the same forest cover class as the LANDFIRE cover raster on 97.2% of pixels. Agreement between the target and imputed data was above 92% for eight of the nine cover bins, with the lowest producer's accuracy at 79% for the 95% cover bin (Table 3). This bin had many fewer plots available for imputation (Figs. 5 and 6), making it more difficult for the random forests algorithm to match the cover values while simultaneously matching height and vegetation group (the other two response variables). Indeed, producer's accuracy tended to increase with the number of plots available for imputation in a cover class (Fig. 6). The proportion of the landscape falling into each of the nine cover bins was similar across the three data sources (FIA plots, LANDFIRE data, and imputed dataset) (Fig. 5). However, the proportions were more similar between the imputed and target data than between either dataset and the FIA plots. Since FIA plot locations are likely representative of the landscape as a whole, this suggests that LANDFIRE may have underestimated the number of pixels in the cover classes with midpoints of 15%, 55%, 65%, and 95%, and overestimated the number of pixels in the 75% and 85% cover classes (Fig. 5).

