Urban Transport News – 24 April 2019 – by Vinod Shah
At what stage does a computer make the crossover into artificial intelligence, why does it matter, and what benefits could it bring to infrastructure? Those questions link two of the biggest challenges we currently face: the tremendous strides being made by computing technology, the huge changes that will affect infrastructure over the next decade or so, and the question of how we handle them.
The promise of artificial intelligence is exercising people in many industries, from technology to health care, marketing and sales, and law. It is being used to add value to data processing and industry workflows, and is marked by its ability to go beyond the mechanical application of complex algorithms and to reason with the data.
Applied to Reality Modeling, these advances improve decision making by providing more accurate, real-world digital context with all the information stakeholders need to design, construct and operate infrastructure assets. In practice, that means computers can now recognize and classify images more accurately and, in doing so, compare them with similar objects in the same category. Because they can recognize minor differences between objects, they can pick up discrepancies or defective elements and initiate corrective actions.
All of this is possible thanks to a process called deep learning, which is crucial to the development of AI for users of reality modeling applications and a catalyst for the true digitalization of the infrastructure industry. Deep learning, a subset of machine learning, occurs when a computer processes information much as the human brain does, with multiple layers of artificial neurons. Researchers train these very large deep neural networks to do all kinds of things, such as the aforementioned image and feature recognition, object detection, and language processing.
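As a loose illustration only (not Bentley's implementation, and with made-up weights), the "multiple layers of artificial neurons" idea can be sketched in a few lines of Python. Each layer takes the previous layer's outputs, applies weighted sums and a non-linearity, and hands the result up the stack:

```python
import math

def layer(inputs, weights, biases):
    """One layer of artificial neurons: a weighted sum per neuron,
    squashed through a sigmoid non-linearity (the 'activation')."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid
    return outputs

def forward(inputs, network):
    """Feed data through every layer in turn; the 'deep' in deep
    learning simply means many such layers stacked."""
    activations = inputs
    for weights, biases in network:
        activations = layer(activations, weights, biases)
    return activations

# A tiny two-layer network with arbitrary illustrative weights.
network = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1]),  # hidden layer: 2 neurons
    ([[1.0, -1.0]], [0.0]),                    # output layer: 1 neuron
]
score = forward([1.0, 0.5], network)
print(score)  # a single value between 0 and 1
```

In a real trained network there are millions of such weights, and they are learned from example images rather than written by hand.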
Seeing Structural Defects
Reality Modeling applications using deep learning can overcome many industry challenges, making it possible for computer vision and image recognition to identify problems with structures or individual pieces of equipment before they become critical. For example, organisations have used Reality Modeling to detect faults in concrete, or to identify cracks in a structure and highlight the severity of the problem. By identifying and segmenting a crack, engineers can determine its exact shape and size, and associate it with other pieces of critical information to formulate a solution.
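The measurement step can be sketched simply. Assuming a vision model has already produced a binary segmentation mask of crack pixels (here a hand-made toy grid standing in for real model output), the crack's size and extent follow from plain pixel counting:

```python
def crack_metrics(mask, mm_per_pixel=1.0):
    """From a binary segmentation mask (1 = crack pixel), derive the
    crack's area and bounding extents -- the shape-and-size data an
    engineer would associate with other inspection information."""
    pixels = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    if not pixels:
        return None  # no crack detected in this patch
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return {
        "area_mm2": len(pixels) * mm_per_pixel ** 2,
        "length_mm": (max(rows) - min(rows) + 1) * mm_per_pixel,
        "width_mm": (max(cols) - min(cols) + 1) * mm_per_pixel,
    }

# Toy mask: a thin diagonal crack in a 5x5 patch of concrete,
# imaged at 0.5 mm per pixel.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0],
]
print(crack_metrics(mask, mm_per_pixel=0.5))
```

The hard part, producing the mask itself, is what the trained deep network does; the geometry on top of it is straightforward.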
To give further examples, the technology enables computers to identify multiple objects, such as people or trees, with similar reliability in a single image, or to highlight objects by drawing a boundary around their exact shape in 2D or 3D so that they can be removed from the scene. Telegraph poles, for instance, can be lifted out of a landscape view, with the software filling the vacant space with a matching extract from the surrounding terrain.
More pertinent to infrastructure, and something you may have seen recently yourself, is the ability of some autonomous vehicles to isolate and identify objects in front of them, supplementing the street data they already hold. Using AI, a vehicle can learn to recognise the typical behaviour of other cars or pedestrians, such as trajectory, distance and speed of approach, and take appropriate steps to avoid a collision.
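An extremely simplified sketch of that distance-and-speed reasoning (real driving stacks use far richer models, and these function names and thresholds are illustrative inventions): once perception has estimated the gap to an object and how fast it is closing, the time to collision decides whether to act.

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact if neither party changes course;
    effectively infinite when the gap is not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def should_brake(distance_m, closing_speed_mps, reaction_margin_s=2.0):
    """Act when the time to collision falls inside a safety margin."""
    return time_to_collision(distance_m, closing_speed_mps) < reaction_margin_s

print(should_brake(30.0, 20.0))  # 1.5 s to impact -> True
print(should_brake(30.0, 5.0))   # 6.0 s to impact -> False
```

The AI's contribution is upstream of this arithmetic: classifying the object and estimating those distances and speeds reliably from camera and sensor data.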
By comparison, we see examples of machine learning every day: most smartphone cameras can identify faces and focus on them, and perform a number of similar actions that may appear intelligent but are, in reality, merely the end result of some rather clever programming.
Bentley’s white paper, then, differentiates between AI, machine learning and deep learning. They are all interconnected, but AI is the most generic and currently the most popular term, applied when a computer does something ‘smart’ or reasons with the data. This ability to reason, Bentley argues in the white paper, is what distinguishes AI from other types of computer programming. Programming a computer to speak is not AI; programming it to create speech and to understand its meaning would be.
Having trained an artificial neural network to recognise an object by showing it many images of similar objects in different orientations, users can then apply it to select all instances of such objects. An example from the white paper outlines how this was done on a CH2M Fairhurst project in Europe. The project team wanted to design and create a 3D model for an upgraded road, a model without trees on either side, as they were planning to widen the road to add more lanes. Team members also wanted to create a new surrounding landscape, so they needed to remove the trees to better visualise their options. To do this, a dataset was given to the AI research team, who first classified the trees on both sides of the road and then removed them in one pass, instead of having to select and manually remove them one at a time.
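A hypothetical sketch of what that "one pass" means in practice (the white paper does not publish the actual workflow, so the data structure here is invented for illustration): once every scene element carries a class label from the trained network, removing all trees is a single filter rather than a per-object manual selection.

```python
# Hypothetical scene: each element has been classified by the
# trained network into road, tree, building, and so on.
scene = [
    {"id": 1, "label": "road"},
    {"id": 2, "label": "tree"},
    {"id": 3, "label": "tree"},
    {"id": 4, "label": "building"},
    {"id": 5, "label": "tree"},
]

def remove_class(elements, label):
    """Drop every instance of a class in one pass, instead of
    selecting and deleting objects one at a time."""
    return [e for e in elements if e["label"] != label]

cleared = remove_class(scene, "tree")
print([e["id"] for e in cleared])  # -> [1, 4]
```

The expensive step is the classification itself; once labels exist, the bulk edit is trivial, which is exactly why classify-then-filter beats manual selection at scale.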
Another example of the use of AI was provided by Skand Pty Ltd, which recently used Reality Modeling for its project at the Royal Melbourne Institute of Technology (RMIT). Located in Melbourne, Victoria, Australia, the university wanted to integrate drone imagery and analysis into its award-winning forty-year asset lifecycle program. Starting with the university’s Brunswick campus, Skand used a drone to capture images of the 65,000-square-metre site. It then used a web program to incorporate the information into RMIT’s existing building envelope project inspection brief, turning the 2D images into meaningful datasets mapped to a 3D reality model. The program used computer vision and machine learning to identify and categorise defects such as cracks, moss, algae and bird nests, along with corrosion and other forms of building material degradation.
Being able to integrate drone imagery into RMIT’s asset lifecycle program helped Skand deliver a superior quality of model and mapping of defects for better and safer asset maintenance planning, as roof and façade inspections could be carried out without leaving the ground. By combining machine learning and Reality Modeling applications with 3D visualisation and a single service reporting point, Skand was able to create a more cost-effective and time-saving maintenance solution, some 60% cheaper than traditional inspection methods.
We are at the earliest stages of using computer vision assisted by AI, and Bentley puts forward a number of future capabilities in its white paper, such as the ability to classify images in reality meshes, and to use neural networks that have already learned objects from other images to detect them automatically in the future.
Even more interesting is the suggestion that AI could be used to improve the technology itself and the user’s experience, learning how it can be leveraged to maximize the value of Reality Modeling and improve productivity. As part of this, Bentley has announced its Early Access Program for ContextCapture Insights, a Reality Modeling solution that automatically detects and locates objects using 3D machine learning technology. It provides an automated solution to help reduce time and costs associated with the analysis of real-world conditions from real data.
Architects and engineers create 3D models to provide better visibility into a project’s progress and end-goals. Using Reality Modeling applications to create the models could accelerate the design process whilst keeping everyone informed of changes.
The integration of AI into Reality Modeling will revolutionize the infrastructure industry, and the human brain is still quite capable of working out that there is no limit to what the technology can do.
View original article at urbantransportnews.com