
Our feature paper “Hyperspectral and LiDAR Fusion Using Deep Three-Stream Convolutional Neural Networks” is now published online.

Recently, convolutional neural networks (CNNs) have been intensively investigated for the classification of remote sensing data, as they extract invariant and abstract features well suited for classification. In this paper, a novel framework is proposed for the fusion of hyperspectral images and LiDAR-derived elevation data based on CNNs and composite kernels. First, extinction profiles are applied to both data sources in order to extract spatial and elevation features from the hyperspectral and LiDAR-derived data, respectively. Second, a three-stream CNN is designed to extract informative spectral, spatial, and elevation features individually from both available sources. The combination of extinction profiles and CNN features enables us to jointly benefit from low-level and high-level features and improve classification performance. To fuse the heterogeneous spectral, spatial, and elevation features extracted by the CNN, instead of a simple stacking strategy, a multi-sensor composite kernels (MCK) scheme is designed. This scheme achieves higher spectral, spatial, and elevation separability of the extracted features and effectively performs multi-sensor data fusion in kernel space. In this context, a support vector machine and an extreme learning machine, with their composite-kernel versions, are employed to produce the final classification result. The proposed framework is evaluated on two widely used data sets with different characteristics: an urban data set captured over Houston, USA, and a rural data set captured over Trento, Italy. The framework yields the highest overall accuracies (OA) of 92.57% and 97.91% for the Houston and Trento data sets, respectively. Experimental results confirm that the proposed fusion framework produces competitive classification accuracy in both urban and rural areas and significantly mitigates the salt-and-pepper noise in classification maps.
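To give an idea of how a composite kernel fuses heterogeneous features, here is a minimal sketch of the weighted-sum idea behind such a scheme, using scikit-learn with a precomputed kernel. The feature arrays, the stream weights and the choice of RBF kernels are illustrative assumptions for this sketch, not the paper's actual MCK implementation:

```python
# Minimal composite-kernel sketch: one RBF kernel per feature stream
# (spectral, spatial, elevation), fused as a weighted sum and passed to
# an SVM with a precomputed kernel. All data and weights are placeholders.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(streams_a, streams_b, weights, gamma=1.0):
    """Weighted sum of per-stream RBF kernels; weights should sum to 1."""
    return sum(w * rbf_kernel(a, b, gamma=gamma)
               for (a, b), w in zip(zip(streams_a, streams_b), weights))

# Stand-ins for CNN features per stream and for the class labels
rng = np.random.default_rng(0)
X_spec, X_spat, X_elev = (rng.normal(size=(100, 64)) for _ in range(3))
y = rng.integers(0, 5, size=100)

streams = [X_spec, X_spat, X_elev]
weights = [0.4, 0.4, 0.2]  # illustrative stream weights, e.g. tuned by cross-validation

K_train = composite_kernel(streams, streams, weights)
clf = SVC(kernel="precomputed").fit(K_train, y)
```

For unseen samples, the same composite kernel is computed between the test streams and the training streams before calling `clf.predict`.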

This paper addresses the important problem of multi-sensor data fusion in both urban and rural areas using deep learning and composite kernel methods. The proposed framework yields competitive classification performance compared to state-of-the-art approaches and shows great potential for accurate land cover classification applications.

Li, H.; Ghamisi, P.; Soergel, U.; Zhu, X.X. Hyperspectral and LiDAR Fusion Using Deep Three-Stream Convolutional Neural Networks. Remote Sens. 2018, 10, 1649. DOI: https://doi.org/10.3390/rs10101649

Yesterday, the practical field trip in the Erg Chebbi ended and the majority of the group arrived safely in Heidelberg. Another part of the group, together with the equipment, is still on the way back by car and ferry.

After several days in the field, we are already very excited to start working on the large amount of multi-source data, including terrestrial laser scanning, ground penetrating radar, electrical resistivity tomography, sediment analysis, RTK GNSS and borehole measurements. In the following weeks and months, the students will analyze the data in small research groups and bring together their results in order to gain new insights into the genesis and dynamic evolution of star dunes.

Finally, the students would like to thank Prof. Olaf Bubenzer, Prof. Bernhard Höfle, Dipl.-Geogr. Manuel Herzog and Katharina Anders, M.Sc., for providing a wide range of methodological and content-related input and for organizing this great practical field trip in an impressive sand dune environment. Shukran!

P.S.: You might also be interested in reading the previous blog posts from the practical field training.

We are happy to announce that we will be part of INTERGEO 2018 in Frankfurt this week, October 16-18. The event, billed as the “global hub of the geospatial community”, has quite something to offer, with hundreds of innovative companies showcasing their products and visions.

We are joining up with the Federal Agency for Cartography and Geodesy (BKG) at their booth in hall 12.1 (12.1F.017 to be exact).

Feel free to visit us and discover the growing ecosystem around openrouteservice.org with all the many possibilities - we are definitely looking forward to meeting you!

The latest list of functionalities includes not only routing, geocoding and isochrones (faster and better than ever) on an interactive web map (maps.openrouteservice.org), but also APIs for these and further services such as time-distance matrix calculations or a POI API (openpoiservice), all with professional documentation. ORS supports more specialised routing profiles than ever: from heavy vehicles, wheelchairs and e-bikes to fitness-level biking and others, with many options each. Several dedicated ORS instances for disaster response are updated at high frequency. The isochrone service also supports population statistics; there is a QGIS plugin, GeoJSON support, a very handy Python library, libraries for Java and JavaScript, and a library for R users. Everything is open source on GitHub. We also offer a series of Jupyter notebook tutorials with topics ranging from healthcare access analysis to combining Twitter analysis and ORS directions. So stay tuned for the future!
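To give a flavour of the API, below is a small sketch using the openrouteservice Python library mentioned above. The API key is a placeholder obtained from openrouteservice.org, and the exact response layout may vary between service versions; the geocoding, isochrone and matrix endpoints are called in a similar fashion:

```python
# Hedged sketch: request a car route between two points in Heidelberg
# using the openrouteservice Python library. YOUR_API_KEY is a placeholder.
import openrouteservice

client = openrouteservice.Client(key="YOUR_API_KEY")

coords = ((8.6724, 49.3988), (8.6911, 49.4094))  # (lon, lat) pairs
route = client.directions(coords, profile="driving-car", format="geojson")

# Distance [m] and duration [s] of the route, as returned in the GeoJSON summary
print(route["features"][0]["properties"]["summary"])
```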

As a research institution we obviously also develop several research prototypes, e.g. for healthy routing (green and quiet routes), landmark-based navigation, routing across open spaces and much more… Let’s get in touch!

On Thursday and Friday our group went further into the Erg in order to extend measurements from the last years on a large star dune arm and then spent one night in a dune camp.

Yesterday was the last day of field work in the Erg. The TLS group completed the data acquisition on a star dune, collecting more than 5,000,000,000 single 3D measurements. Moreover, the “SediMen” group took more than 100 samples of dune sand along various transects through the Erg. These samples will be evaluated in the laboratory regarding, e.g., color and granulometry in order to reveal the origin of the sand.

You might also be interested in reading the previous blog posts from the practical field trip.

Our paper about Deep Learning from Multiple Crowds: A Case Study of Humanitarian Mapping is available online now.

Satellite images are widely applied in humanitarian mapping, which labels buildings, roads and other features for humanitarian aid and economic development. However, the labeling is currently mostly done by volunteers. In a recently accepted study, we utilize deep learning to solve humanitarian mapping tasks of the mobile app MapSwipe. Current deep learning techniques, e.g. convolutional neural networks (CNNs), can recognize ground objects from satellite images, but they rely on numerous labels for training for each specific task. We solve this problem by fusing multiple freely accessible crowdsourced geographic data sources and propose an active learning-based CNN training framework named MC-CNN to deal with the quality issues of the labels extracted from these data, including incompleteness (e.g., some kinds of objects are not labeled) and heterogeneity (e.g., different spatial granularities). The method is evaluated with building mapping in South Malawi and road mapping in Guinea, using level-18 satellite images provided by Bing Maps and volunteered geographic information (VGI) from OpenStreetMap, MapSwipe and OsmAnd.

In comparison with previous VGI and deep learning work, the advantages of our study lie in (i) combining multiple VGI data sources with technical solutions that deal with the quality issues (i.e., incompleteness and heterogeneity), and (ii) empirically studying the deep learning method’s application in the humanitarian mapping software MapSwipe with a machine-volunteer collaboration mechanism.
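To illustrate the machine-volunteer collaboration idea, here is a deliberately simplified sketch of one active-learning round in that spirit. The sklearn-style classifier interface, the binary building/no-building setting and the uncertainty-based selection rule are our own illustrative assumptions, not the authors' MC-CNN implementation:

```python
# Simplified machine-volunteer loop: train on fused crowdsourced labels,
# let the model label tiles it is confident about, and send only the most
# uncertain tiles back to volunteers. `model` is any classifier exposing
# fit/predict_proba (e.g. a CNN wrapper); all names are hypothetical.
import numpy as np

def active_learning_round(model, X_labeled, y_labeled, X_pool, budget=100):
    model.fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_pool)[:, 1]         # P(building) per unlabeled tile
    uncertainty = -np.abs(proba - 0.5)                # probabilities near 0.5 = least sure
    ask = np.argsort(uncertainty)[-budget:]           # tiles routed to volunteers
    keep = np.setdiff1d(np.arange(len(X_pool)), ask)  # tiles the model labels itself
    return ask, keep
```

Routing only the `budget` most uncertain tiles to volunteers is what allows a large share of the labeling effort to be saved while keeping accuracy high.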

The results, based on multiple metrics including precision, recall, F1 score and AUC, show that MC-CNN can fuse the crowdsourced labels for higher prediction performance and can be successfully applied in MapSwipe for humanitarian mapping, saving 85% of the labor while achieving an overall accuracy of 0.86. From the evaluation on real-world data in Africa, we found that

(i) combining multiple VGI data sources significantly outperforms any single VGI source because of the increased sample diversity,

(ii) training MC-CNN requires sufficiently large sample sets,

(iii) MC-CNN achieves robust learning across different CNN architectures, including LeNet, AlexNet and VGGNet,

(iv) MC-CNN saves a large share of the labor while keeping a high overall accuracy when applied in MapSwipe with the machine-volunteer collaboration mechanism.

These findings will benefit the deep learning-based exploiting of VGI data for humanitarian mapping. This work has been conducted in the context of the ongoing DeepVGI project at HeiGIT. Stay tuned for future updates and new results!

Chen, J., Y. Zhou, A. Zipf and H. Fan (2018): Deep Learning from Multiple Crowds: A Case Study of Humanitarian Mapping. IEEE Transactions on Geoscience and Remote Sensing (TGRS), 1-10. DOI: 10.1109/TGRS.2018.2868748

During the last three days of field work we not only acquired a lot of data at different sites in the Erg Chebbi, but also experienced various weather conditions, including two sand storms, rain, and sunny and hot days.

Yesterday, one group measured ground control points for satellite and photogrammetric point cloud data with an RTK GNSS all around the Erg Chebbi.

Moreover, ground penetrating radar and borehole measurements were carried out along an electrical resistivity tomography profile in order to combine subsurface information from different sensors.

The terrestrial laser scanning group continued with scanning a star dune in order to obtain a complete 3D star dune model.

We will keep you updated with further posts - stay tuned!

Our new ohsome dashboard is another preview of what is and will be possible with our ohsome OpenStreetMap history analytics platform. Behind the scenes, we added support for the Apache Ignite big data framework and deployed an instance using the full OSM history data of the whole of Germany on Heidelberg University’s cloud computing infrastructure heiCLOUD. Apache Ignite is an open-source distributed database, caching and processing platform designed to store and compute on large volumes of data across a cluster of nodes. This enables larger and faster queries: common requests such as counting the buildings of a larger city at a monthly time resolution typically complete in less than one second, and take only a few seconds even for a whole country.

The dashboard, which uses Germany as example data, is based on the Nepal dashboard but includes more functionality: it is more generic and flexible, allowing custom filtering of all available OSM tags and types, not only buildings or roads. The dashboard now also includes administrative boundaries from the state down to the city level, which make it easier to select relevant search areas. Using the dashboard, generating accurate statistics about the historical development of OSM data for an arbitrary region is now as easy as pie.
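For programmatic access, the same kind of statistic can also be requested from the public ohsome API. The sketch below uses the current v1 endpoint and filter syntax (https://api.ohsome.org/v1), which may differ from the deployment described in this post:

```python
# Hedged sketch: monthly building counts for a bounding box around Heidelberg
# via the public ohsome API. Endpoint and parameters follow the current v1 docs.
import requests

res = requests.get(
    "https://api.ohsome.org/v1/elements/count",
    params={
        "bboxes": "8.625,49.375,8.73,49.44",   # rough bbox around Heidelberg
        "time": "2010-01-01/2018-01-01/P1M",   # monthly snapshots
        "filter": "building=* and type:way",   # building ways only
    },
)
for row in res.json()["result"][:3]:           # first three monthly values
    print(row["timestamp"], row["value"])
```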

As you may know, the idea of the ohsome platform is to enable intrinsic quality analytics and an understanding of the development of OSM through time and space, for both researchers and the OSM community. We are looking forward to hearing your feedback and ideas for improvement. Please feel free to contact us with your comments or to contribute to the open source development. Future work will focus on enlarging the supported region towards a global scale, as well as adding further functionality.


Data acquisition in the Erg Chebbi has now started. In different groups, several measurement techniques (drilling, electrical resistivity tomography, ground penetrating radar, sediment analysis, RTK GNSS, terrestrial laser scanning) were applied. Find some impressions in the pictures below.

Last week we upgraded openrouteservice, and with that comes the ability to include elevation information in routing pretty much anywhere on the globe (sorry, people on Antarctica, we don’t have routing for you just yet). So if you want to know the elevation and steepness of your drive from Svalbard airport to the EISCAT Svalbard Radar station, now you can!

Probably more useful, however, is that elevation information is now available for more “accessible” locations such as the north of Norway and the whole of Sweden. Other places such as Iceland now also have elevation available, which could be of great benefit if, for example, you are planning to travel around the country by bike and want to know where you will hit the tough spots.

Before this release, openrouteservice used SRTM elevation data as its sole source of elevation, which restricted us to providing elevation and steepness information only up to 60 degrees north of the equator. With an update to the service, which now makes use of a newer version of GraphHopper, GMTED2010 elevation data can also be used, providing the information needed for locations farther north.
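If you want to retrieve those elevation values yourself, the sketch below shows one way via the openrouteservice Python library. The coordinates only roughly approximate the Svalbard example above, and the API key is a placeholder:

```python
# Hedged sketch: request a route with elevation attached to the geometry.
# YOUR_API_KEY is a placeholder; the coordinates roughly approximate
# Svalbard airport and the EISCAT Svalbard Radar site as (lon, lat) pairs.
import openrouteservice

client = openrouteservice.Client(key="YOUR_API_KEY")
coords = ((15.4656, 78.2461), (16.0290, 78.1530))

route = client.directions(coords, profile="driving-car",
                          format="geojson", elevation=True)

# With elevation=True, each geometry point should carry a third value: [lon, lat, ele]
print(route["features"][0]["geometry"]["coordinates"][:3])
```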

There will be more features coming along with this update, so stay tuned for more information about those!

After the participants of the practical field training arrived in Fès yesterday and spent the evening there, the group is now on the way to the Erg Chebbi east of the Anti-Atlas mountains. On the way, the group passes various geomorphological landscape units, including the Western Meseta and the Middle Atlas and High Atlas mountains. Moreover, we crossed the watershed between the catchment of the Moulouya river, which drains to the Mediterranean Sea, and that of the Souss river, which drains into the Atlantic Ocean.

Tomorrow, data acquisition will start in the Erg. We will keep you updated with further posts.

You might also be interested in reading the blog post from Thursday.
