Fesowola O. V. Akintoye, MNIS, RS, MSc GIS & Env.
fesowola@glogeomaticsnigeria.com.
Poverty is defined as a state of lack of the resources needed for quality of life, lifestyle and livability. It can also be a state of lack in the midst of plenty, or of resources that lie outside collective reach. Researchers have over the years identified economic, social and political dimensions of poverty, but have recently begun to explore environmental poverty as a direct indicator of the first three dimensions.
Mapping environmental poverty can provide spatial tools and intelligence support for understanding the causes of poverty.
It has been established that there is a link between environmental poverty and physical planning, and that a well-planned environment can still reflect high levels of poverty if economic, social and political poverty exist around it. An example is the recurrent emergence of slums around cities and industrial districts of the world, despite effective internal plans and controls being in place.
Recent geoinformation challenges include mapping intangible resources across space, and poverty, as one such resource, has become a necessary subject of study owing to the high incidence of poverty across the globe, especially in developing and underdeveloped countries. It is no wonder that the United Nations listed poverty eradication as one of the major areas of interest in this century.
Environmental poverty, like the other realms of poverty, cannot be measured or mapped directly, but only by defining and measuring its direct indicators. Environmental poverty indicators are:
1. Lack of access into and within an environment.
2. Utilization of public spaces.
3. Utilization of building, road and river/stream setbacks.
4. Inadequate public utilities and infrastructure.
5. Uncontrolled and incessant changes in planned land uses.
Examples of environmental poverty indicators are visible in Ebute Metta, Lagos Island and Ikeja in Lagos State. These localities were highbrow residential areas and the abodes of the middle class for over three decades. Mapping environmental poverty in these localities will provide insight into its causes and help in environmental restoration.
Mapping environmental poverty requires the extraction of thematic data from multi-date imagery and orthophotos, and spatio-temporal analysis of the environment to detect the indicators. The derived information can be combined with socio-economic and demographic data from field enumeration surveys for a comprehensive analysis of poverty in societies.
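As a minimal sketch of the spatio-temporal comparison described above, two classified land-use grids derived from multi-date imagery can be tallied cell by cell to flag change. The class codes and grid values below are invented purely for illustration:

```python
# Hypothetical sketch: detecting land-use change between two classified
# grids (e.g., derived from multi-date imagery). Class codes are invented:
# 0 = open space, 1 = residential, 2 = informal settlement.

def change_fraction(grid_t1, grid_t2):
    """Fraction of cells whose land-use class changed between two dates."""
    changed = total = 0
    for row_t1, row_t2 in zip(grid_t1, grid_t2):
        for c1, c2 in zip(row_t1, row_t2):
            total += 1
            if c1 != c2:
                changed += 1
    return changed / total

t1 = [[0, 0, 1],
      [1, 1, 1],
      [0, 1, 1]]
t2 = [[2, 0, 1],   # open space converted to informal settlement
      [1, 2, 1],
      [0, 1, 2]]

frac = change_fraction(t1, t2)  # 3 of the 9 cells changed class
```

Cells that changed class between the two dates can then be inspected against the indicator list to decide whether the change signals environmental poverty.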
Geoinformatics provides an array of technologies capable of detecting and depicting the indicators of environmental poverty using time-series analysis, generating knowledge and providing an understanding of the underlying issues that push an environment into the shades of poverty.
Monday, January 12, 2015
Mapping National Resources: The Nigerian Experience
National Resources Mapping
Fesowola O. V. Akintoye, MNIS, RS, MSc GIS & Env.
fesowola@glogeomaticsnigeria.com.
Resources can be tangible, like minerals, vegetation and marine resources, or intangible, like market potential for businesses, human capital, culture and other indicators of socio-economic phenomena across space. Every society has natural resources with which nature has endowed the environment, as well as intangible resources that emanate from the impacts of population size, human activities and socio-economic potential. Intangible resources are deductive; they are secondary data/information indicative of critical underlying events and activities which, if understood, can provide knowledge for decision support for governments and businesses.
Geoinformation practice has evolved around mapping tangible resources, which include all land features: buildings, roads, utilities, vegetation and so on. All that we used to present as mere features on maps are now socio-economic and environmental resources that require the collection of their attributes for resource management and decision support. A roadside drain, for example (considering its depth and width), is a resource that can be linked to household waste, flooding, security and more. A road, likewise, is a resource for accessing buildings, facilities and hospitals, or for transporting goods and services from one location to another. Modern-day maps, then, are no longer mere location coordinates visually represented by a graphic legend.
The major challenge lies with mapping intangible resources. These resources are not commonly visible even though they exist around us, but they can be revealed through field enumeration exercises (socio-economic) and the generation of resource information from the acquired data, and/or through the collection of environmental data that can be synthesized to deduce facts which can then be given geospatial dimensions and entities.
Intangible resources provide insights into environmental and socio-economic problems, provide opportunities for geostatistical modeling, and are used to build models that reveal the distribution and patterns of human activities, problems and needs.
Mapping intangible resources has in the past been done using spreadsheets and traditional statistical models, but today every acquired dataset is strengthened by geolocation attributes. Adding geographic coordinates to resource attributes allows every resource to be revisited so that the impact of government inputs in solving the identified problems can be assessed. For example, communities or households identified as poor or very poor during a socio-economic survey can be revisited and assessed years later for accountability towards eradicating poverty.
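Attaching geographic coordinates to survey attributes can be as simple as wrapping each record as a GeoJSON Point Feature, the common interchange format understood by QGIS and the other open-source tools mentioned below. The field names and values here are invented for illustration:

```python
# Illustrative only: attaching geographic coordinates to a household
# survey record as a GeoJSON Feature. Field names and values are made up.

def to_geojson_feature(lon, lat, attributes):
    """Wrap attribute data as a GeoJSON Point Feature (RFC 7946 layout)."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": attributes,
    }

record = to_geojson_feature(
    3.3792, 6.5244,  # approximate coordinates in Lagos
    {"household_id": "HH-0042", "poverty_class": "very poor",
     "survey_year": 2014},
)
```

Because each record carries its coordinates, the same household can be located and re-surveyed years later, exactly the accountability loop described above.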
The cost of mapping intangible resources is considerably lower than that of tangible resources, especially with open-source tools available for data collection using smartphones (Google's ODK, Magpi, etc.); data editing and processing (LibreOffice, Apache OpenOffice, AbiWord, etc.); data manipulation (QGIS, GRASS, gvSIG, MapWindow GIS, etc.); databases and servers (PostgreSQL, MySQL, etc.); and Geoweb tools (GeoServer, MapWindow, GeoWebCache/Boundless, etc.).
Most resources published by the National Bureau of Statistics (NBS) and the National Population Commission (NPC) were generated from intangible (attribute) data collected without geolocation attributes. Mapping accurate and appropriate intangible resources contributes tremendously to big data for business and governance.
The GI community stands to gain from focusing on the potential available for providing knowledge for decision support in both business and governance, leading to more jobs for young GI professionals and business opportunities for all of us.
Friday, January 9, 2015
What Google Did: Serving Maps Online to Everyone
"In the future, GIS users could work on GIS data by using their web browsers without installing a GIS software on their local machines" Penguin, Z-R and Tsou M-H (2003)
"(InternetGIS systems) come very close to delivering the functionality expected by a professional working with GIS, and, providing that response times are sufficiently fast and high-quality printouts are available (through downloadable PDF or post-script files for example, they can replace a desktop GIS for many purposes" Jones, C. B. and Purves, R. S. (2008)
Professional and academic opinions have differed over the years on the place of Geoweb 2.0 standards and operators and on the improvements made with Geoweb 3.0. The debate has dragged on for close to a decade now, and one begins to wonder why the differences persist despite the acceptance of the deliverables across the globe.
What we have today on the Geoweb are data resources in seamless plug-and-play mode with other online spatial data sources, and online spatial data processing services that interoperate with distributed databases and chain together with other processing services to form loosely coupled applications, allowing users to query online catalogues to discover the data and processing services they need.
Geoweb, technically, means bending the facilities offered by the web to suit the needs of geospatial data processing, incorporating web mapping and web-based geo-services.
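The service chaining described above rests on standard OGC requests; a client can, for instance, assemble a WMS GetMap URL from the catalogued parameters. The endpoint and layer name below are placeholders, not a real service:

```python
# Sketch of a standard OGC WMS 1.1.1 GetMap request; the endpoint and
# layer name are hypothetical placeholders, not an actual server.
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width=512, height=512):
    """Build a GetMap URL; bbox is (minx, miny, maxx, maxy) in EPSG:4326."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "landuse",
                     (3.0, 6.0, 4.0, 7.0))
```

Any WMS-compliant server, such as GeoServer, will answer such a request with a rendered map image, which is what lets independently hosted services plug together.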
The emergence of no-cost services such as Google Maps, Yahoo Maps, Microsoft's Bing Maps, Virtual Earth, OpenLayers and others has added a populist element to the Geoweb. Anyone with fairly minimal knowledge of computing (activists, hobbyists, whoever) can access the Web's rich sources of spatial data and create their own online mapping.
We are in a post-Geoweb 2.0 era of interactive, democratic, human-to-machine web communication incorporating wikis, blogs, mashups, volunteered information and AJAX applications. Google Maps, Yahoo Maps, Microsoft's Bing Maps, Virtual Earth, OpenLayers and the neogeography movement provide Geoweb 2.0 services, and over a hundred other mapping services are listed today.
Geoweb 3.0 technologies are already being implemented without yet having a convincing edge, while visionaries are already discussing what Geoweb 4.0 will look like.
It is estimated that the release of Google Maps/Earth increased the number of GIS users (broadly defined) from 1 million to 100 million (Sui, 2008).
It is, however, important to understand that GIS is a system that solves problems through its ability to query a built mirror image of life's events: you first model the reality of the events and activities, and then extract intelligent information and knowledge.
The problem-solving ability of GIS means it is a tool and not a science. However, several scientific disciplines, such as physics/electronics, material science, statistics, mathematics, computer science/programming and geodesy (projection), provide the knowledge needed to build an efficient system, and their inputs have brought GIS to where it is today. That helps us understand where caches, apps and the like come from: they make near-real-time map generation possible, giving online mapping service providers the edge over Geoweb 1.5 applications and taking maps to the doorstep of interested users. Google and the other online mapping services have not done anything beyond serving maps and providing apps and other gadgets for customizing individually motivated deliverables.
How Google revolutionized the mapping service
When it was released on 7th February 2005, Google Maps immediately made other online mapping services look desperately old-fashioned. Previous mapping systems, based on Geoweb 1.5 approaches, required that an entire web page be regenerated each time the user made an alteration to an online map; we were used to clicking and waiting. Google offered what has come to be known as the "slippy map" interface: by moving the mouse, users can slide the map around the map window, and unless the map is moved very rapidly, it appears seamless.
Google Maps, like other online mapping services, operates within conventional browsers, requiring no plug-ins, no Flash and so on; it is ultimately a thin-client solution. It is therefore worth exploring how current mapping services are put together.
The answer is that, rather like AJAX itself, current online mapping services are essentially clever and efficient integrations of existing technologies rather than products of a single, radically new technological breakthrough. Google Maps' technology foundation, for example, includes:
a. AJAX
Google Maps is the archetypal AJAX application, and its 'slippy map' interface has revolutionized expectations of online mapping services. AJAX as a technology allows Google Maps to function like a desktop application, although in detail Google Maps uses an alternative to XMLHttpRequest to facilitate asynchronous communication with the server. The embedded HTML page can be refreshed without needing to reload the main page.
b. Map Tiling
Although a Google map appears to be seamless, it is in fact a mosaic of pre-rendered map tiles. Every tile within the Google Maps system, whether map, satellite or hybrid view, is pre-prepared. Each tile is 256 by 256 pixels and about 12 KB in size, which makes tiles individually very quick to download. Each visible Google 'map' is a grid of five or more rows and columns of tiles, with presently unseen tiles around the edges of the visible area loaded in anticipation of the user panning the map. If you move your mouse in a diagonal direction across the screen, you may be able to see the tiles loading.
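Google's exact tile addressing is proprietary in detail, but the widely documented web-Mercator "slippy map" scheme (used by OpenStreetMap and most tiled services) shows the idea: a longitude/latitude and a zoom level map to integer tile indices.

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Web-Mercator tile indices for a point at a given zoom level
    (the standard 'slippy map' scheme with 256x256-pixel tiles)."""
    n = 2 ** zoom                                   # tiles per axis
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad)
                            + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# At zoom 1 the world is a 2x2 grid; (0°, 0°) falls in the bottom-right
# quadrant's top-left tile, index (1, 1).
tile = lonlat_to_tile(0.0, 0.0, 1)
```

Doubling the zoom level quadruples the number of tiles, which is why only the tiles near the viewport are fetched.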
c. Map Compression
Google Maps' data supplier, Telcontar, stores the map files in a highly compacted proprietary Rich Map Format (RMF) that aids fast transfer. Yahoo uses the same supplier, but the data are rendered differently so as to look dissimilar.
d. Map Tile Caching
Caching algorithms ensure that the most frequently used data are positioned so that they can be retrieved most quickly; in Google Maps, the most frequently used tiles are the fastest to recall. Since most users tend, most of the time, to look at particular parts of the world (their city, the locality of their home and so on), the map tiles of those areas are likely to have been stored in the web cache of the user's PC from previous sessions, so Google Maps can serve these tiles in subsequent sessions without ever leaving the user's machine. Google simply takes advantage of the caching mechanism built into the web for its own purposes. Google's technologies have improved over the years, but having served on a GIS project in which Google also participated, I have come to the conclusion that Google does not offer complete GIS services and can best be described as an online mapping service.
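The "most frequently used tiles are the fastest to recall" behaviour can be illustrated with a toy least-recently-used (LRU) cache; this is an illustrative sketch, not Google's actual implementation:

```python
from collections import OrderedDict

# Toy LRU cache illustrating the idea behind tile caching: recently
# touched tiles stay resident; the least recently used are evicted.
class TileCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key):
        if key not in self._tiles:
            return None
        self._tiles.move_to_end(key)          # mark as recently used
        return self._tiles[key]

    def put(self, key, tile):
        if key in self._tiles:
            self._tiles.move_to_end(key)
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)   # evict least recently used

cache = TileCache(capacity=2)
cache.put((10, 521, 493), b"tile-a")          # key: (zoom, x, y)
cache.put((10, 522, 493), b"tile-b")
cache.get((10, 521, 493))                     # touch tile-a
cache.put((10, 523, 493), b"tile-c")          # evicts tile-b, not tile-a
```

A browser's web cache applies similar recency logic to tile URLs, which is why a user's home city usually loads instantly.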
"What Google spawned was nothing but a great awakening. The advent of Google Maps is not the 'GIS killer' or 'killer app'; it was a promoter. It was the platform that allowed more people to see the utility of geospatial information" (Francica, 2008).
GIS is not all about data and information; it is also concerned with validating the quality of the results the system produces. It also entrenches itself, among other things, in large-scale institutional data arrangement, sharing and usage, which the present online mapping service providers are incapable of supporting. For example, they cannot afford to invest in and operate large-scale resource applications across the globe because of the peculiarities of each locality. Large-scale land/cadastral, utility, building and city information systems sensitive to state and local government administration are presently beyond them. Collecting data for location and street searches, low-accuracy terrain classification, and the ability to overlay individual data on service providers' maps are matters of the democratization of data, not core GIS issues.
Serious earth data for earthquakes, tsunamis and tremors are not detected or monitored using data from phones or 5-metre-accuracy GPS, but with geodetic equipment. The online services are concerned more with data presentation, and their usability depends on the target application and use.
I know that opinions differ, and diverse opinions converge to bring out facts and temper sentiment with logic. My opinion is that there is an ongoing convergence of the earth-resource fields, forcefully pushed by emerging technologies, especially the World Wide Web, making all of us cross what used to be natural boundaries, just as is happening in the arts and humanities. However, technology should not direct our paths merely out of excitement; it should help expose the reasons for the increasing frequency of earth problems, earthquakes and all.
"(InternetGIS systems) come very close to delivering the functionality expected by a professional working with GIS, and, providing that response times are sufficiently fast and high-quality printouts are available (through downloadable PDF or post-script files for example, they can replace a desktop GIS for many purposes" Jones, C. B. and Purves, R. S. (2008)
Professional and academic opinions have differed over the years on the place of Geoweb 2.0 standards operators and the improvements made with Geoweb 3.0. This have dragged going to a decade now and then one begins to wonder why the differences despite the acceptance of the deliverables across the globe.
What we have today on Geoweb is data resources seamlessly in plug-and-play mode with other online spatial data sources; online spatial data processing services interoperating distributed databases, chaining together with other processing services to form loosely coupled applications allowing users to query online catalogues to discover the data services and processing the services they need.
Geoweb technically means bending the facilities offered by the web to suit the needs of geospatial data processing, incorporating web-mapping and web based Geo-services.
The emergence of no-cost services such as Google Maps, Yahoo maps, Microsoft's Bing Maps, Virtual Earth, Open Layers etc. has added populist element to the Geoweb. Anyone with the fairly minimal knowledge of computing- Activistss, hobbyists, whoever- can access the Web's rich sources of spatial data and create their own online mapping.
We are in post Geoweb 2.0 era that implements interactive/democratic web/human to machine communications incorporating Wikis, Blogs, Mashups, Volonteered information and AJAX Applications. Google Maps, Yahoo maps, Microsoft's Bing Maps, Virtual Earth, Open Layers and Neogeography are providing Geoweb 2.0 Services. Over a hundred other mapping services are listed today.
Geoweb 3.0 technologies are already being implemented without having a convincing edge while visionaries are already discussing what Geoweb 4.0 will look like.
It's estimated that the release of Google Maps/Earth has increased GIS users (broadly defined) from 1 million to 100 million, Sui 2008.
It is however important that we understand that GIS is a system that solves problems based on the ability to query a built mirror image of life's events. Which means you first model the reality of the events and activities and extract intelligent information and knowledge.
The problem solving ability of GIS means it is a tool and not a science. However, there are several scientific disciplines like physics/electronics, material science, statistics, mathematics, computer science/ programming and geodesy (projection) that provide knowledge to build an efficient systems. Their inputs have brought GIS to where we are today. That helps us to understand were Cache, Apps etc are coming from, making near realtime map generation possible by online mapping services providers the edge over the 1.5 Geoweb applications and taking maps into the door step of interested users. Google and other online mapping services have not done anything beyond serving maps and providing apps for customizing individually motivated deliverables by providing apps and other gadgets.
How Google revolutionized mapping service.
When it was released on 7th February 2005, Google Maps immediately made other online mapping services look desperately old fashioned. Previous mapping systems, based on Geoweb 1.5 approaches required that an entire web page be re-generated each time the user make alterations to an online map. We were used to clicking and waiting. Google now offers what has come to be known as " sloppy map interface. By moving the mouse, users can slide the map around the map window, and unless the map is moved very rapidly, the map seems to be seamless.
Google maps, like other online mapping services, operates within conventional browsers, requiring no plg-ins, no flash etc. it is ultimately a thin client solution. It is therefore important to explore how current mapping services are managed.
The answer is that rather like AJAX itself,current online mapping services are essentially clever and efficient integrations of existing technologies rather than based on single radically new technology breakthrough. For example, Google maps technology foundation includes:
a. AJAX
Google maps is the archetypal Ajax application and it's 'Slippy map' interface has revolutionized the expectations of online mapping services. Ajax as a technology allows Google maps to function like a desktop application, although in detail, Google maps uses an alternative to XMLhttpRequest to facilitate asynchronous communication with the server. The embedded HMTL page can be refreshed without needing to reload the main page.
b. Map Tiling
Although the Google maps appear to be seamless: it is in fact a mosaic of pre-rendered map tiles. Every tile within the Google maps system, whether map, satellite or hybrid view is pre-prepared. Each tile has dimensions of 256 by 256 pixels and about 12 Kbps in size, which make them individually very quick to download. Each visible Google 'map' is a grid of five or more rows and columns of presently unseen tiles around the edges of the visible area that are loaded in anticipation of the user panning the map. If you move you mouse in a diagonal direction across the screen, tou may be able to see the tiles loading.
c. Map Compression
Google maps data supplier 'Telcontar' stores the map files in a highly compacted proprietary Rich Map Format RMF that aids fast transfer. Yahoo also use the same supplier but the data are rendered differently to look dissimilar.
d. Map Tile Caching
Caching algorithms ensure that data that are most frequently used are positioned so that the can be most quickly retrieved. In Google maps, the most frequently used tiles are the fastest to be recalled. As most users will tend most times to look at particular parts of the world- their city, the locality of their home etc. the map tiles of those areas are likely to already have been stored in the web caches of the users' PC from previous sessions, so Google maps can provide this tiles for subsequent sessions without ever leaving the users' machine. Google takes advantage of the caching mechanism that is built into the web system for its own purposes. Google technologies have improved over the years but having served on a GIS project which Google also participated, I have come to the conclusion that Google do not offer complete GIS services and can best be said to do online mapping service.
What Google spawned was nothing but a great awakening. The advent of Google maps is not the "GIS Killer" or " Killer App", it was a promoter. It was the platform that allowed more people to see the utility of geospatial information. Francica (2008)
GIS is not all about data and information, it is also concerned with the validation of the quality of results produced by the system. It also entrench itself among others in large scale institutional data arrangement, sharing and usage which the present online mapping services providers are presently incapable of doing. For example, they can not afford to invest and operate large scale resource applications across the globe because of the peculiarity of each localities. Large scale land/cadastral, utilities, buildings, city information etc. sensitive to states and local government administration are presently beyond them. Collecting data for location and street searches, low accuracy terrain classification, ability to lay individual data on service providers maps are issues of democratization of data and not core GIS issues.
Serious earth data for earthquake, Sunamis, tremors are not detected or monitored using data from phones or 5 meter accuracy GPS, but with geodetic equipments. They all bother more on data presentation and the usability depends on the target application and use.
I know that opinion differs and diverse opinions converge to bring out facts and embrace logical sentiments. My opinion is that there is an on going convergence of the earth resource fields forcefully pushed by emerging technologies, especially the world-wide-web making all of us to cross what used to be natural boundaries like it is also happening in arts and humanities. However, technology should not just direct our paths out of excitements, it should help in exposing the reasons for the high increase in the frequencies of occurrence of earth problems, earthquakes and all.
The Modern Geodesy
The primary mission of Modern Geodesy is the definition and maintenance of precise geometric and gravimetric reference frames and models, and the provision of high accuracy positioning techniques for users in order to connect to these frames.
The International Association of Geodesy (IAG) has established services for all the major satellite geodesy techniques:
International Global Navigation Satellite System (GNSS) Service (IGS);
International Laser Ranging Service (ILRS);
International Very Long Baseline Interferometry Service (IVS);
International Doppler Orbitography and Radio positioning Integrated by Satellite (DORIS) Service (IDS);
International Gravity Field Service (IGFS); and others.
These services generate a wide range of products, including precise satellite orbits, ground station coordinates, Earth rotation and orientation values, gravity field quantities and atmospheric parameters, all of which are vital to the definition of the terrestrial and celestial reference systems. These reference systems are the foundation for all operational geodetic applications associated with mapping and charting, navigation, spatial data acquisition and management, as well as support for the geosciences.
The International Celestial Reference System (ICRS) forms the basis for describing celestial coordinates, and the International Terrestrial Reference System (ITRS) is the foundation for the definition of terrestrial coordinates to the highest possible accuracy. The definitions of these systems include the orientation and origin of their axes, scale, physical constants and models used in their realization, e.g., the size, shape and orientation of the reference ellipsoid that approximates the geoid and the Earth’s gravity field model. The coordinate transformation between the ICRS and ITRS is described by a sequence of rotations that account for variations in the orientation of the Earth’s rotation axis and its rotational speed.
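The "sequence of rotations" relating the ICRS and ITRS can be sketched with elementary rotation matrices; the full transformation chains precession-nutation, the Earth rotation angle and polar motion, but the angle below is made up purely to show the mechanics:

```python
import math

# Elementary rotation about the z-axis (geodesy convention R3(theta)).
# The full ICRS->ITRS transformation chains such rotations; the angle
# used here is illustrative, not a real Earth rotation angle.

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0],
            [-s, c, 0.0],
            [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

era = math.radians(100.0)            # made-up Earth rotation angle
v_icrs = [1.0, 0.0, 0.0]             # unit vector in the celestial frame
v_itrs = mat_vec(rot_z(era), v_icrs) # same vector in the rotating frame
v_back = mat_vec(rot_z(-era), v_itrs)
```

Because rotation matrices are orthogonal, applying the inverse rotation recovers the original vector exactly, and lengths are preserved, which is why chained rotations can relate the two frames without distortion.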
Global Geodetic Observing System
In order to address the ever increasing performance requirements for global change monitoring, the IAG established in 2007 the Global Geodetic Observing System (GGOS).
GGOS’s goal is to coordinate all of the geodetic services and provide high-level products through a single portal. It will also fuel the next revolution in modern geodesy – the unified analysis of all geodetic data through common models – so as to drive an order of magnitude improvement in geodetic accuracy (reference frame stability, quality of geodetic products and models, etc.). In other words, GGOS will promote the development of tools and observing systems to ensure the ITRF can be defined and monitored to millimetre accuracy, with stability at the mm/yr level.
The mission of GGOS is:
(1) to provide the observations needed to monitor, map and understand changes in the Earth’s shape, rotation and mass distribution;
(2) to improve the quality of the global reference frames so that they may provide the fundamental backbone for measuring and interpreting key global change processes; and
(3) to support a variety of applications in geoscience and society for precise positioning, gravity field mapping and modeling.
The resultant improvement in reference frame accuracy and stability will not only benefit critical scientific studies such as measuring sea level rise, but also will provide a stronger framework for precise (centimeter-level) GNSS-enabled positioning in national, regional or global datums.
At a practical level the integration of the outputs of all the IAG services implies a coordinated upgrade of the ground station infrastructure (the stations in the IGS, ILRS, IVS and IDS networks); an increase in the number of co-located stations of the different space geodesy techniques; and steady improvement and continuity of a number of critical geodetic satellite missions (altimetric, GNSS, gravity mapping, etc). Under the GGOS initiative increased investment in geodetic infrastructure such as GNSS permanent reference stations is strongly encouraged.
Reference Frames in Practice Manual Commission 5 Working Group 5.2 Reference Frames
Editor: Graeme Blick
INTERNATIONAL FEDERATION OF SURVEYORS (FIG)
The International Association of Geodesy (IAG) has established services for all the major satellite geodesy techniques:
International Global Navigation Satellite System (GNSS) Service (IGS);
International Laser Ranging Service (ILRS);
International Very Long Baseline Interferometry Service (IVS);
International Doppler Orbitography and Radio positioning Integrated by Satellite (DORIS) Service (IDS);
International Gravity Field Service (IGFS); and others.
These services generate a wide range of products, including precise satellite orbits, ground station coordinates, Earth rotation and orientation values, gravity field quantities and atmospheric parameters, all of which are vital to the definition of the terrestrial and celestial reference systems. These reference systems are the foundation for all operational geodetic applications associated with mapping and charting, navigation, spatial data acquisition and management, as well as support for the geosciences.
The International Celestial Reference System (ICRS) forms the basis for describing celestial coordinates, and the International Terrestrial Reference System (ITRS) is the foundation for the definition of terrestrial coordinates to the highest possible accuracy. The definitions of these systems include the orientation and origin of their axes, scale, physical constants and models used in their realization, e.g., the size, shape and orientation of the reference ellipsoid that approximates the geoid and the Earth’s gravity field model. The coordinate transformation between the ICRS and ITRS is described by a sequence of rotations that account for variations in the orientation of the Earth’s rotation axis and its rotational speed.
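The sequence of rotations linking the two frames can be sketched as a product of elementary rotation matrices. The snippet below is a deliberately simplified illustration in Python/NumPy: it applies only the Earth rotation and polar motion terms of an IERS-style decomposition, omits precession-nutation and the small correction terms of the full transformation, and uses illustrative angle names rather than values from any real model.

```python
import numpy as np

def rot_x(theta):
    """Elementary frame rotation about the x-axis (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0],
                     [0, c, s],
                     [0, -s, c]])

def rot_y(theta):
    """Elementary frame rotation about the y-axis (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, -s],
                     [0, 1, 0],
                     [s, 0, c]])

def rot_z(theta):
    """Elementary frame rotation about the z-axis (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0],
                     [-s, c, 0],
                     [0, 0, 1]])

def icrs_to_itrs(v_icrs, era, xp, yp):
    """Rotate a celestial-frame vector into the terrestrial frame.

    era    : Earth rotation angle (radians)
    xp, yp : polar motion angles (radians)
    Precession-nutation and small correction terms are omitted,
    so this is a conceptual sketch, not a precise transformation.
    """
    W = rot_y(xp) @ rot_x(yp)   # polar motion
    R = rot_z(era)              # Earth rotation
    return W @ R @ v_icrs
```

Because each factor is an orthogonal rotation, the transformation preserves vector lengths; in the full IERS formulation a precession-nutation matrix multiplies the product as well.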
Global Geodetic Observing System
In order to address the ever increasing performance requirements for global change monitoring, the IAG established in 2007 the Global Geodetic Observing System (GGOS).
GGOS’s goal is to coordinate all of the geodetic services and provide high-level products through a single portal. It will also fuel the next revolution in modern geodesy – the unified analysis of all geodetic data through common models – so as to drive an order of magnitude improvement in geodetic accuracy (reference frame stability, quality of geodetic products and models, etc.). In other words, GGOS will promote the development of tools and observing systems to ensure the ITRF can be defined and monitored to millimetre accuracy, with stability at the mm/yr level.
The mission of GGOS is:
(1) to provide the observations needed to monitor, map and understand changes in the Earth’s shape, rotation and mass distribution;
(2) to improve the quality of the global reference frames so that they may provide the fundamental backbone for measuring and interpreting key global change processes; and
(3) to support a variety of applications in geoscience and society for precise positioning, gravity field mapping and modeling.
The resultant improvement in reference frame accuracy and stability will not only benefit critical scientific studies such as measuring sea level rise, but also will provide a stronger framework for precise (centimeter-level) GNSS-enabled positioning in national, regional or global datums.
At a practical level the integration of the outputs of all the IAG services implies a coordinated upgrade of the ground station infrastructure (the stations in the IGS, ILRS, IVS and IDS networks); an increase in the number of co-located stations of the different space geodesy techniques; and steady improvement and continuity of a number of critical geodetic satellite missions (altimetric, GNSS, gravity mapping, etc.). Under the GGOS initiative increased investment in geodetic infrastructure such as GNSS permanent reference stations is strongly encouraged.
Reference Frames in Practice Manual Commission 5 Working Group 5.2 Reference Frames
Editor: Graeme Blick
INTERNATIONAL FEDERATION OF SURVEYORS (FIG)
The Limitations of Technology
2014 has come and gone into history like so many years before it that have shaped and influenced our Earth. Many lessons were learnt, but of particular note to the geoinformation community is the Malaysia Airlines aircraft (flight MH370) that disappeared and was presumed to have crashed into the Indian Ocean.
Yet as the disappearance of Malaysia Airlines flight 370 made clear, knowledge and access to information can be deceptive. After months of relentless searching, involving more than 40 ships and 39 aircraft from 12 countries, we know little more about the missing Boeing 777 than we did on the day it disappeared.
The sea, it turns out, isn't ours for the taking. Even if MH370 is eventually found, it has already reordered our views about geography, data and monitoring. Radar, computer tracking, X-rays, satellite imagery, listening apps: All have significant limits.
According to Wikipedia, the current phase of the search is a comprehensive search of the seafloor, which began in October 2014 and was preceded by a bathymetric survey of the search area that started in May 2014.
The two projects are expected to take up to 12 months to complete at a cost of AU$52 million. The search has revealed the inadequacy of current technologies for collecting data at the depths along the search route, and this has delayed the search work.
Some of the equipment to be used for the underwater search operates best when towed 200 m (650 ft) above the seafloor and is towed at the end of a 10 km (6 mi) cable. The water depth within the Indian Ocean ranges between 3,700 and 23,000 feet. Available bathymetric data for this region was of poor resolution, thus necessitating a bathymetric survey of the search area before the underwater phase began.
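The mixed metric and imperial figures above can be cross-checked with simple unit conversions. The sketch below uses standard conversion factors (feet per metre, metres per statute mile); the helper names are illustrative and not from any source cited here.

```python
FT_PER_M = 3.28084    # feet per metre
M_PER_MI = 1609.344   # metres per statute mile

def ft_to_m(ft):
    """Convert feet to metres."""
    return ft / FT_PER_M

def m_to_ft(m):
    """Convert metres to feet."""
    return m * FT_PER_M

# The quoted figures are round-number conversions:
print(round(m_to_ft(200)))            # 200 m is ~656 ft (quoted as 650 ft)
print(round(10_000 / M_PER_MI, 1))    # 10 km is ~6.2 mi (quoted as 6 mi)
print(round(ft_to_m(3_700)), round(ft_to_m(23_000)))  # depth range ~1128-7010 m
```

In metric terms, then, parts of the search area are roughly 7 km deep, while the towed equipment works best only 200 m above the seafloor.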
The underwater phase of the search uses three vessels equipped with towed deep water vehicles, which use side-scan sonar, multi-beam echo sounders, and video cameras to locate and identify aircraft debris.
As of 17 December 2014, over 11,000 km2 (4,200 sq mi) of seafloor had been searched and the bathymetric survey had mapped over 200,000 km2 (77,000 sq mi) of seafloor. Barring significant delays, the search of the priority underwater area is expected to be completed around May 2015.
(CNN) -- Just how hard is it to find a plane at the bottom of the ocean?
Imagine standing on a mountain top and trying to spot a suitcase on the ground below. Then imagine doing it in complete darkness. That's basically what crews searching for Malaysia Airlines Flight 370 have been trying to do for months.
An unsolved mystery of such proportions, which defies our collective knowledge and best technology, is a shock to the system. It's as if our old concept of the unfathomable sea has suddenly been restored, forcing us to re-evaluate what we know and what can be known. How could 239 people disappear without a trace? All we know is that the answer almost certainly lies somewhere beneath the deep blue sea.
Though the oceans make up 70 percent of the planet’s surface, only about 5 percent has been mapped, which leaves about 65 percent of the world uncharted and unknown. The ocean is the last frontier of human empirical knowledge; even the contours on that eighth-grader’s globe are the product of a mix of scientific measurement, inference and conjecture.
Yet even Google’s new app for viewing ocean bottoms is limited by how little is documented there. It's a crucial gap when trying to find a lost plane. The Guardian in March quoted Malaysia’s acting transport minister saying the MH370 search area covered 2.24 million square nautical miles -- large enough to contain almost two billion Boeing 777s, encompassing about 1.5 percent of the world.
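Those two figures can be sanity-checked with rough arithmetic. The sketch below assumes approximate public dimensions for a Boeing 777-200 (length about 63.7 m, wingspan about 60.9 m, treated as a bounding rectangle) and a total Earth surface area of about 5.1 × 10^8 km2; both are assumptions introduced here, not figures from the article.

```python
NM_IN_M = 1852.0                        # metres per nautical mile
search_area_m2 = 2.24e6 * NM_IN_M**2    # 2.24 million sq nautical miles

# Approximate 777-200 footprint as a length-by-wingspan rectangle.
b777_footprint_m2 = 63.7 * 60.9         # ~3,879 m2 per aircraft

n_planes = search_area_m2 / b777_footprint_m2
print(f"{n_planes:.2e}")                # ~1.98e+09, i.e. "almost two billion"

earth_surface_m2 = 5.1e14               # approx. total surface area of the Earth
fraction = search_area_m2 / earth_surface_m2
print(f"{fraction:.1%}")                # ~1.5% of the world, as quoted
```

Both of the quoted comparisons hold up under this back-of-the-envelope check.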
Finding the plane is daunting. Bringing it back from the deep will be even more difficult.
"At these depths ... there's no recovery like it," said Mary Schiavo, a former inspector general for the U.S. Department of Transportation.
References
- Alan Huffman, "MH370 Search Revives Age-Old Mystery In The Indian Ocean", International Business Times (www.ibtimes.com).
- Holly Yan and Ed Lavandera, "How deep is deep? Imagining the MH370 search underwater", CNN (edition.cnn.com).
- Wikipedia.