I am not sure that this is what the Google Earth engineers had in mind when they released it, but the new COM API exposes a method called “GetPointOnTerrainFromScreenCoords”.
If you pass this method normalized screen coordinates from the GE render window, it will return the latitude, longitude, and elevation of that point in Google Earth. This opens up an enormous amount of potential for using Google Earth as your GIS interface.
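The main bit of glue work is converting raw pixel positions into the normalized values the method wants. Here is a minimal sketch; it assumes the convention is a [-1, 1] range with (0, 0) at the center of the render window, which you should verify against the API documentation for your version of GE:

```python
def normalize_screen_coords(px, py, width, height):
    """Convert pixel coordinates (origin at top-left) to a normalized
    [-1.0, 1.0] range with (0, 0) at the window center.

    ASSUMPTION: this is the convention GetPointOnTerrainFromScreenCoords
    expects -- confirm against the GE COM API docs before relying on it.
    """
    nx = (px / width) * 2.0 - 1.0
    ny = 1.0 - (py / height) * 2.0  # flip: screen y grows downward
    return nx, ny

# The center pixel of an 800x600 window maps to the origin:
print(normalize_screen_coords(400, 300, 800, 600))  # -> (0.0, 0.0)
```

The resulting pair would then be passed to GetPointOnTerrainFromScreenCoords on the COM application object (for example via a COM wrapper in your language of choice) to get back the terrain point.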
Google has always maintained that they are not a GIS, and they are right. But they have a great visualization system. All that is needed is a way to interact with that interface that makes sense for GIS-style applications. All of the GIS activity can then go on behind the scenes.
As an example, I can now click anywhere on Google Earth and capture the coordinates. I can also drag a rectangle across the screen and determine its actual geographic extent within GE. I can then pass this data back to a GIS representation (such as a shapefile) of the loaded KML data. The result is a color-coded selection set of data, or a new window that holds whatever feature data I would like to see from another server (or local data).
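Once the two corners of the dragged rectangle have been converted to terrain points, the selection itself is just a bounding-box test over the feature coordinates. A sketch, using a hypothetical feature format of `(id, lat, lon)` tuples:

```python
def select_in_rect(features, corner1, corner2):
    """Return the features whose (lat, lon) falls inside the geographic
    rectangle spanned by two terrain points captured from a screen drag.

    `features` is a hypothetical flat format: a list of (id, lat, lon)
    tuples; a real shapefile layer would be read via a GIS library.
    Corners may be given in any order.
    """
    lat_min, lat_max = sorted((corner1[0], corner2[0]))
    lon_min, lon_max = sorted((corner1[1], corner2[1]))
    return [f for f in features
            if lat_min <= f[1] <= lat_max and lon_min <= f[2] <= lon_max]

points = [("a", 45.1, -122.5), ("b", 44.0, -120.0), ("c", 45.3, -122.1)]
print(select_in_rect(points, (45.0, -123.0), (45.5, -122.0)))
```

The selected ids can then drive whatever happens next: color-coding the matching placemarks in a regenerated KML file, or fetching the full attribute records from a server.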
I can also use the coordinates to calculate a buffer that is loaded back in as KML, or use them to select other data.
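A simple version of that buffer step is to approximate a circle around the captured point and emit it as a KML Polygon. This sketch uses a flat-earth approximation that is fine for small radii; a real GIS buffer would use proper geodesic math:

```python
import math

def buffer_kml(lat, lon, radius_m, segments=36):
    """Approximate a circular buffer around (lat, lon) as a KML Polygon
    Placemark. Uses an equirectangular approximation, so accuracy
    degrades for large radii or high latitudes.
    """
    earth_r = 6371000.0  # mean earth radius in meters
    dlat = math.degrees(radius_m / earth_r)
    dlon = dlat / math.cos(math.radians(lat))
    coords = []
    for i in range(segments + 1):  # +1 closes the ring
        a = 2 * math.pi * i / segments
        coords.append(f"{lon + dlon * math.sin(a):.6f},"
                      f"{lat + dlat * math.cos(a):.6f},0")
    return ("<Placemark><Polygon><outerBoundaryIs><LinearRing><coordinates>"
            + " ".join(coords) +
            "</coordinates></LinearRing></outerBoundaryIs></Polygon></Placemark>")

kml = buffer_kml(45.0, -122.0, 1000)  # 1 km buffer
```

The resulting fragment can be dropped into a KML Document and reloaded into GE, where it immediately becomes a visual selection tool.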
You can also create very quick sketches directly in Google Earth that can be saved as shapefiles or sent out to other clients for collaboration. Much more to come on this.