Glimpses and Challenges of Computer Modelling in Civil Engineering
Abstract
The study of natural phenomena or major environmental
problems usually requires the analysis of interdisciplinary data
acquired from increasingly powerful sensors to achieve sustainability.
This almost always leads to complex interactions that demand hard
mathematical modeling and the adoption of holistic system views. In many
cases the initial conditions are not known, the domain is not well
defined, and the constitutive parameters are heterogeneous, generating
large uncertainties. For this reason, knowledge-based models built from
huge amounts of data have attracted enormous interest for understanding
and solving phenomena that do not always admit suitable mathematical
laws for their representation. The purpose of this communication is to
briefly discuss the technical challenges of data processing in
engineering and to give a glimpse of the scientific development and
future direction of this domain.
Introduction
The goal of systems modeling is to find and formulate,
in a precise mathematical way, the laws that govern phenomena. However,
it is recognized that such perfect descriptions are not always possible
(Lockwood [1]; Hansson [2]; Fagerstrom [3]).
Incomplete and imprecise knowledge, qualitative observations and the great
heterogeneity of many interacting agents usually cause uncertainties
when modeling complex phenomena of the ever-changing natural world or of the
business world. It is also recognized that knowledge is held by experts or
stored in data (Gaul & Schader [4]).
For organizations, what matters most is to understand the great
dynamism of the natural systems, or of the competitive environment in which
they operate, in order to make decisions about the sustainability and
responsiveness of those decisions. Information, knowledge and data are
therefore paramount.
Sensors and Data
Smartphones, tablets, notebooks and connected objects
are full of sensors. According to estimates from various companies and
industries attentive to these data flows, the number of deployed sensors
could exceed one trillion as early as the early 2020s. The data are
observed, stored and released dynamically, which requires that science be
done online. Thus, to build accurate models that update themselves in
dynamic, competitive environments, it is increasingly necessary to develop
methodologies that generate current, adaptive forecasts for decision
making. At the same time, the falling cost of electronic sensors
applied in environmental monitoring has allowed data to be collected and
used to build models in real time (Rolph et al. [5]; Pereira et al. [6]; Sanchez-Rosario et al. [7]).
These models use a continuous flow of data, and there
is no control over the order in which elements arrive for processing.
Flow data has unlimited size; once processed, an element is usually
discarded and cannot be recovered unless it is stored in memory, which is
usually small relative to the volume of data received. Queries about
these flows need to be answered in real time, either because they refer
to a real-world event or because it is too expensive to store the data.
The corresponding methodologies should be able to extract knowledge from
huge amounts of structured and unstructured data obtained from in situ
monitoring, regulations and the web, and add it to existing knowledge.
For model generation one can rely on statistical methods, machine
learning and computational methods inspired by nature, as presented by
Fairbairn et al. [8] in a civil engineering application. These models are
robust because they are noise tolerant, and they can be coupled to other
models to provide hybrid solutions.
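To make the single-pass, bounded-memory constraint concrete, the sketch
below updates a simple regression model one stream element at a time and
then discards the element. The synthetic sensor_stream generator, the
learning rate and the feature dimension are illustrative assumptions, not
part of the original work.

```python
# Minimal sketch of online (streaming) model updating, assuming a scalar
# target predicted from a fixed-length feature vector; the synthetic
# sensor_stream() generator and all parameters are hypothetical.
import numpy as np

def sensor_stream(n_events=10_000, n_features=3, seed=0):
    """Simulate an unbounded flow of (features, target) sensor readings."""
    rng = np.random.default_rng(seed)
    true_w = rng.normal(size=n_features)
    for _ in range(n_events):
        x = rng.normal(size=n_features)
        y = x @ true_w + rng.normal(scale=0.1)   # noisy observation
        yield x, y                               # element is never stored

def online_linear_model(stream, n_features=3, lr=0.01):
    """Update weights one element at a time (stochastic gradient descent).
    Memory use is O(n_features), independent of the stream length."""
    w = np.zeros(n_features)
    for x, y in stream:
        error = x @ w - y
        w -= lr * error * x      # single-pass update; x, y then discarded
    return w

if __name__ == "__main__":
    weights = online_linear_model(sensor_stream())
    print("estimated coefficients:", np.round(weights, 3))
```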
Scientific and Data Intensive Computing
The use of high-performance computers and database
technology should be reappraised, since it has caused significant changes
in the design and programming of algorithms for engineering problem
solving. In the scientific computation of large systems, the different
ways in which numerical methods are designed or adapted have stimulated
research in both technical and applied aspects. New, scalable computing
architectures (Big Data platforms, NoSQL databases) have boosted database
technology, allowing the simulation of real problems with great
consistency and accuracy. Big Data refers to data that is too big to fit
on a single server, too unstructured to fit into a row-and-column
database, or too continuously flowing to fit into a static data
warehouse.
The adopted methodology implies the integrated use
of several technologies and can be briefly described by the following
activities (a minimal code sketch of this workflow follows the list):
a. Acquisition of a database containing the relevant
parameters; for model development and subsequent simulation, the
availability of reliable data is indispensable;
b. Expert selection, data analysis, and data selection for model construction, testing and validation;
c. Selection of the knowledge-representation technique and acquisition of specialized knowledge, if necessary;
d. Construction, testing and validation of the model;
e. Generalization for different configurations;
f. Analysis of results and conclusions.
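The following Python sketch walks through activities (a) to (f) on a
synthetic dataset; the data, the model choice (a random forest) and the
split sizes are hypothetical stand-ins, not the methodology of any specific
application cited here.

```python
# Illustrative sketch of activities (a)-(f); the synthetic data, model
# choice and split sizes are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# (a) acquisition of a database with the relevant parameters
#     (here replaced by synthetic data; in practice, loaded from the
#      monitoring records)
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.2, size=1000)

# (b) data selection for model construction, testing and validation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# (c)-(d) choice of the knowledge-representation technique, then
#         construction, testing and validation of the model
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))

# (e)-(f) generalization to unseen configurations and analysis of results
print(f"mean absolute error on held-out data: {mae:.3f}")
```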
The main characteristics of the traditional Scientific
Computing model and of the Intensive and Distributed Data Computation
model are summarized in Table 1.

Both represent computing strategies, and the best
and most efficient one depends on the situation. Large-scale data
processing in a distributed system is very challenging in many respects,
from system performance to reliability. Typically, MPI (Message Passing
Interface) supports a more flexible communication model than MapReduce
(asynchronous versus synchronous). While MPI (Gropp et al. [9]) moves the
data during communication, MapReduce (Lammel [10]) uses the concept of
"data locality", so that data needs to travel between disk and CPU only
once; this is not possible in MPI, which requires that the data to be
processed fit in memory (in-core processing).
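For reference, the sketch below shows the MapReduce programming model on a
toy word-count problem. It runs in a single process, so the distribution
and data-locality machinery of a real framework is only suggested in the
comments; the function names and sample data are illustrative.

```python
# Conceptual sketch of the MapReduce programming model (word count), run
# locally in one process; a real deployment would execute the map and
# reduce phases on the nodes that hold each data split ("data locality").
from collections import defaultdict

def map_phase(document):
    """Emit (key, value) pairs for one input split."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate values by key (done by the framework in practice)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Aggregate all values emitted for one key."""
    return key, sum(values)

if __name__ == "__main__":
    splits = ["big data needs scalable computing",
              "scalable computing needs big data locality"]
    intermediate = [pair for split in splits for pair in map_phase(split)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
    print(counts)
```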
Research Challenges and Development
The use of large volumes of data will become a
fundamental basis of competition and growth for individual companies.
From the point of view of competitiveness and value-capture potential,
all companies need to come to grips with large volumes of data. In most
industries, established competitors and new entrants alike will use
strategies to innovate, compete and capture value, based on deep,
real-time information. Parallel processing, clustering, virtualization,
large-scale environments, high connectivity and cloud computing, as well
as other flexible resources, are enabling organizations to take advantage
of Big Data and big data analytics.
The opportunities of the future will be greater and
more difficult to solve, to the point of challenging our capacity
for imagination. Quantum computers will introduce a new era in
computing. Quantum systems will help us find new ways to model
financial data and isolate key global risk factors so as to make better
investments, and they may make facets of artificial intelligence, such as
machine learning, much more powerful. The speed of response and decision
making will place heavy demands on our ability to react. We are
approaching a make-or-break moment: firms that act will thrive, and those
that do not will disappear. The future lies in innovation, in the
convergence of Bio-Cogno-Info-Nano technologies (Ebecken [11]), and in the
mining of scientific trends for innovation.