The MariData project was recently launched to improve the energy efficiency of ship operations and to reduce emissions. A vessel's fuel consumption is affected by many factors: the main and auxiliary engines, the propulsion system, the ship's hull, the propellers, seakeeping performance, and weather and sea conditions. To tackle this complexity, we divide the task of building a weather routing system into subproblems and handle each subproblem separately. Understanding the correlation between vessel speed and weather conditions leads to a better understanding of how weather affects fuel consumption. We therefore obtained speed data from the ships' Automatic Identification System (AIS) and then retrieved weather and sea state information for each data point using its timestamp and geographic location. Combining these data sources enabled us to build a model that better captures the inherent relationships.
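The enrichment step described above can be sketched as follows. This is a minimal, illustrative example only: the record layout, the grid resolution, and the `lookup_weather` function are assumptions for demonstration, not the project's actual code, and a real system would query a weather or reanalysis service instead of an in-memory dictionary.

```python
from datetime import datetime

# Illustrative AIS records: (timestamp, latitude, longitude, speed over ground in knots).
ais_records = [
    (datetime(2021, 3, 1, 10, 12), 54.32, 10.14, 14.2),
    (datetime(2021, 3, 1, 10, 42), 54.35, 10.02, 12.8),
]

# Hypothetical weather archive keyed by (hour, 1-degree grid cell).
weather_archive = {
    (datetime(2021, 3, 1, 10), 54.0, 10.0): {"wind_speed": 8.5, "wave_height": 1.2},
}

def lookup_weather(ts, lat, lon):
    """Map a data point to the archive entry covering its hour and grid cell."""
    key = (ts.replace(minute=0, second=0, microsecond=0), float(int(lat)), float(int(lon)))
    return weather_archive.get(key)

def enrich(records):
    """Attach weather and sea conditions to each AIS speed record."""
    enriched = []
    for ts, lat, lon, speed in records:
        weather = lookup_weather(ts, lat, lon)
        enriched.append({"timestamp": ts, "speed": speed, **(weather or {})})
    return enriched

result = enrich(ais_records)
```

The enriched records pair each observed speed with the prevailing conditions, which is the input needed for modeling their relationship.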
WaCoDiS & CODE-DE: The future (of Earth Observation data processing) is now
Efficient Earth Observation data processing relying on CODE-DE
In April 2020, the relaunch of the Copernicus Data and Exploitation Platform – Deutschland (CODE-DE) was announced. This was a big deal: the second generation of the CODE-DE platform would now offer public access to the long-awaited online processing environment. At that time, our WaCoDiS research project was in its final phase, and we were ready to validate our developments as part of a pre-operational deployment. The relaunch of CODE-DE thus came at just the right time.
GeoNode and Kubernetes – a good match?
A closer look at the flexibility and scalability of a GeoNode deployment using Kubernetes and related Cloud concepts.
In mid-2020, we started a project with Fraym, a data science company working with many types of datasets to execute projects in countries undergoing substantial societal change, places around the world where data has traditionally been hard to access. Spatial data, ranging from base data such as administrative boundaries to Earth Observation and satellite data, plays an important role. Since the majority of the datasets will be reused in other project contexts, a solution for managing and discovering spatial and non-spatial datasets is indispensable for the effective execution of data analysis processes. In close cooperation with the experts at Fraym, we developed a data platform based on the GeoNode software stack. The challenge was to meet a set of requirements:
- Flexibility: supporting the different dataset types such as raster, vector and tabular data
- Scalability: accommodating a large amount and variety of data (multiple terabytes, more than 100,000 individual datasets)
- Availability: taking uptime into account (e.g. on-demand bootstrapping vs 24/7 operations)
- Processability: promoting processing functionality close to the data
In this blogpost, we take a closer look at the first two aspects: flexibility and scalability. We look into the details of our deployment concept using Kubernetes on Amazon Web Services (AWS) and other related cloud concepts.
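As a rough illustration of what such a Kubernetes-based setup can look like, the following configuration fragment sketches a Deployment for a GeoNode web component together with a HorizontalPodAutoscaler. All names, image tags, and resource figures are hypothetical placeholders, not the actual deployment described in the project.

```yaml
# Illustrative sketch only: a GeoNode web Deployment with horizontal autoscaling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: geonode-web            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: geonode-web
  template:
    metadata:
      labels:
        app: geonode-web
    spec:
      containers:
        - name: geonode
          image: geonode/geonode:latest   # placeholder tag
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: geonode-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: geonode-web
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The autoscaler addresses the scalability requirement by adding replicas under load, while the declarative Deployment supports on-demand bootstrapping as well as continuous operation.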
Trajectory Analytics Toolbox: Final Post
Installation
After 13 fun weeks of Google Summer of Code spent learning about package development in R, trajectory data mining, and spatial data mining, I am happy to announce that traviz is ready for use and open source involvement. To download and install it, visit traviz and follow the installation instructions detailed in the README. The package website is also hosted on GitHub Pages at the following link. If you would like to see the vignettes with an introductory tutorial and a showcase of traviz, visit vignette 1 and vignette 2.