Testing ICT systems is both a phase of any project and a measure of quality
assurance for ICT products and services. Unlike other kinds of industrial
testing, system tests are performed at three distinct levels: compliance with
the technical specifications for hardware, functional and technical
requirements for software, and system behavior in different environments and
under boundary conditions. In addition, a special kind of testing is used to
examine and verify the real behavior of the system rather than its documented
behavior.
Testing most often follows best industry practices, but it is frequently
necessary to expand testing beyond the areas the industry knows, or beyond
what the system designer envisioned for the system under the given conditions.
This is the case whenever the system operates outside its intended boundaries,
that is, when the manufactured and installed system exceeds the expectations
of the designer or the contracting authority, and users discover many
unforeseen ways to use it.
Standard system testing covers borderline cases, performance under conditions
the design did not predict, and unforeseen inputs and patterns of use. For
individual devices it is usually carried out directly on the devices
themselves, while large, spatially distributed systems are commonly tested on
a model of the system, in which case it is very important that the model
represents the real system as closely as possible.
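To illustrate what covering borderline cases looks like in practice, here is a minimal boundary-value sketch in Python; the packet_queue_accept function and its capacity limit are hypothetical stand-ins for a real component under test.

    # A minimal boundary-value sketch. The component under test is
    # hypothetical: a queue specified to accept packets only while it
    # holds fewer than MAX_CAPACITY items.

    MAX_CAPACITY = 1000  # assumed specification limit


    def packet_queue_accept(current_size: int) -> bool:
        """Hypothetical stand-in for the real component under test."""
        return current_size < MAX_CAPACITY


    def test_boundaries():
        # Boundary-value testing concentrates on the edges of the
        # specification, where off-by-one and overflow defects cluster.
        assert packet_queue_accept(0) is True                  # lower bound
        assert packet_queue_accept(MAX_CAPACITY - 1) is True   # just below the limit
        assert packet_queue_accept(MAX_CAPACITY) is False      # exactly at the limit
        assert packet_queue_accept(MAX_CAPACITY + 1) is False  # just above the limit


    if __name__ == "__main__":
        test_boundaries()
        print("boundary cases pass")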
Tests are designed so that, collectively, they cover the complete
specification of the system as well as all possible outcomes, while the number
of distinct tests is kept to a reasonable level: a new test is introduced only
if it can produce an outcome the existing tests cannot. There is no need to
prove what has already been proven.
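One established way of keeping the number of tests reasonable while still covering every outcome is equivalence partitioning: inputs that the specification maps to the same outcome are grouped, and only one representative of each group is tested. A minimal sketch, built around a hypothetical classification function:

    # Equivalence partitioning: each partition groups inputs that the
    # specification maps to the same outcome, so one representative per
    # partition is enough - testing a second member proves nothing new.

    def classify_load(percent: float) -> str:
        """Hypothetical function under test: maps link utilization to a state."""
        if percent < 0 or percent > 100:
            return "invalid"
        if percent < 80:
            return "normal"
        return "congested"


    # One representative input per equivalence class, plus the expected outcome.
    PARTITIONS = [
        (-5.0, "invalid"),    # below the valid range
        (40.0, "normal"),     # ordinary load
        (95.0, "congested"),  # high load
        (120.0, "invalid"),   # above the valid range
    ]

    for sample, expected in PARTITIONS:
        assert classify_load(sample) == expected, (sample, expected)
    print("one representative per outcome class is sufficient")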
It is worth noting that the tester examines and verifies reality, and that
tests conducted on mathematical models can never be more accurate than the
model itself, so every inconsistency or error in the model automatically
propagates into errors in testing.
Perhaps the best recommendation is for testers to be professional skeptics who
trust none of the diagrams, schematics, or information obtained from the
system's developers, operators, or owners. One reason for this skepticism is
that implemented ideas often have to be simplified, either to fit certain
technical standards or to be presented to people who need only a conceptual
understanding rather than deep domain knowledge. Testers therefore have to
measure and verify reality, trust only data obtained by exercising the real
system, and hold a firm critical line when evaluating the results.
Simplification, as already noted, is where the problem lies, because data
obtained from the real system can differ significantly from data obtained
from the model.
A special characteristic of system testing is the fact that no one, not even
the designers or the testers, knows everything about what the system does and
how it works. This is because the system has wider boundaries than those we
observe. This raises the problem of deciding which parts of the system must be
observed and measured and which need not be, which characteristics are
relevant, and how the relevant parts of the system depend on one another.
When discrepancies and anomalies appear in a system, the question arises
whether these problems matter for the whole system or remain local, whether
they affect other elements of the system, and whether and how they propagate
and escalate. Since most professionals in the ICT sector have no education in
systems theory, most local problems are treated as isolated, while the fact
that in a system every element affects every other element and the system as a
whole is often forgotten or consciously disregarded; the only question left is
whether the accumulated small problems escalate enough to threaten the
functioning of the whole system.
In this respect it should be noted that, in theory, every system has a
context. This means that testers must establish that context before testing
the system. This is particularly important when testing software for extra
functionality, i.e., the gray area in which an application, device, or piece
of software operates outside its projected values. These unwritten conditions
require checks under circumstances that were not originally planned in the
project. In this way, examining real solutions expands the range within which
a device or piece of software can be used reliably. This type of test has so
far been mainly reserved for the software components of a system and rarely
applied to infrastructure, but due to the rapid growth of infrastructure and
the continuous overload caused by the accelerated growth of the services that
use it, this kind of testing has become very interesting in the infrastructure
area as well.
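What such a gray-area check might look like can be sketched as follows: rather than asserting pass or fail inside the specification, the probe sweeps inputs beyond the projected range and records what actually happens, mapping the envelope in which the component can still be used reliably. The transcode function and its nominal limit are invented for the illustration.

    # Gray-area probing: sweep past the projected operating range and
    # record what actually happens, instead of only asserting behavior
    # inside it. transcode and its 8 MB limit are hypothetical stand-ins.

    NOMINAL_LIMIT_MB = 8  # the value the project documentation promises


    def transcode(payload_mb: int) -> str:
        """Hypothetical component: specified only up to NOMINAL_LIMIT_MB."""
        if payload_mb <= NOMINAL_LIMIT_MB:
            return "ok"
        if payload_mb <= 3 * NOMINAL_LIMIT_MB:
            return "degraded"  # undocumented but graceful
        raise MemoryError("payload too large")


    for size in range(1, 6 * NOMINAL_LIMIT_MB, 4):
        try:
            result = transcode(size)
        except Exception as exc:  # record failures instead of stopping
            result = f"FAILED ({type(exc).__name__})"
        zone = "in spec" if size <= NOMINAL_LIMIT_MB else "gray area"
        print(f"{size:3d} MB [{zone}]: {result}")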
In this sense it is interesting to test growth scenarios, which can loosely be
described as an attempt to understand how a project will fall apart if any
element of the system is pushed to excess; in this particular case, how the
network infrastructure fails when too many simultaneous services attempt to
use it at the same time (the problem of insufficient capacity). In this way,
testing the system becomes an engine for continuous, deep learning about the
system.
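A minimal sketch of such a growth-scenario test, assuming a hypothetical service whose capacity is simulated with a semaphore: the number of simultaneous clients is ramped up until refusals appear, which is precisely the point at which the system starts teaching us something new.

    # Growth-scenario sketch: ramp up concurrent clients against a
    # service until it starts refusing work, to locate the capacity
    # limit. The service is simulated with a semaphore; a real test
    # would aim the same ramp at actual infrastructure.

    import threading
    import time
    from concurrent.futures import ThreadPoolExecutor

    CAPACITY = 50  # assumed: the service holds 50 simultaneous sessions
    slots = threading.Semaphore(CAPACITY)


    def request() -> bool:
        """One simulated service call: succeeds only if a slot is free."""
        if slots.acquire(blocking=False):
            try:
                time.sleep(0.05)  # hold the slot briefly, like a real session
                return True       # served
            finally:
                slots.release()
        return False              # refused: capacity exhausted


    # Exact counts vary from run to run; the failure pattern is the point.
    for clients in (10, 25, 50, 100, 200):
        with ThreadPoolExecutor(max_workers=clients) as pool:
            results = list(pool.map(lambda _: request(), range(clients)))
        refused = results.count(False)
        print(f"{clients:4d} simultaneous clients -> {refused} refused")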
System tests must not be so expensive that a company or institution is
unwilling to run them, nor so cheap that, given the fluctuation of knowledge
about testing techniques, they become random. It is therefore important to
note that many so-called automated tests are technically not tests at all,
but merely proofs of predefined characteristics.
Automated tests cannot be aware of emergent problems, especially those that
appear in the time between two automated runs. It is therefore important that
system testing always involves human intelligence, because at the moment no
automated software system or artificial-intelligence service is complex,
precise, and accurate enough to meet the requirements of automated testing
while still maintaining critical awareness.
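The distinction can be made concrete. A typical automated check, like the hypothetical latency probe sketched below, proves one predefined characteristic at the instants it runs; everything emergent, and everything that happens between two runs, is invisible to it.

    # A typical automated "test" that is really a proof of one
    # predefined characteristic: latency under a threshold at the
    # instant of the probe. It cannot notice emergent behavior, and
    # whatever happens between two runs leaves no trace. The probe
    # itself is simulated here.

    import random
    import time

    LATENCY_BUDGET_MS = 200
    PROBE_INTERVAL_S = 1  # seconds here; minutes or hours in practice


    def measure_latency_ms() -> float:
        """Simulated probe; a real check would time an actual request."""
        return random.uniform(20, 180)


    def automated_check() -> bool:
        # Proves exactly one thing: "latency was under budget just now".
        return measure_latency_ms() < LATENCY_BUDGET_MS


    for _ in range(3):  # three probes; real checks run indefinitely
        print("OK" if automated_check() else "ALERT")
        time.sleep(PROBE_INTERVAL_S)  # everything in this window is missed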
Controlling the work of testers is also particularly difficult in scripted
tests. However, tools such as the Panaya Scenario Recorder allow appropriate
test scenarios to be created and replayed automatically, replacing human
testers' work in certain areas and freeing people for the necessary training
in the field of testing. Testers need to stay sharp in order to do their job
properly.
If we look at the work of testers in the ICT industry, we will notice that in
this industry almost everything is tested: every product or service, software,
hardware, and system element. Yet extensive testing of the full global
infrastructure under load has never been carried out. Although there are very
many tests that can be consistently applied to the elements of the Internet
infrastructure, nobody has ever conducted a real, full-scale crash test of the
Internet. Given its size and complexity, as well as the countless technical
redundancies in its software and hardware infrastructure, it is questionable
whether this is even feasible. But since not only our economy, government, and
legal system depend heavily on the Internet, but most of the lower layers of
social infrastructure (companies, families, associations, cities, personal
connections) are also directly and inextricably tied to it and to a large
degree dependent on it, the question arises whether such a test is necessary
and what we would be able to learn from it.
If we agree that it is necessary, at least for determining the limits of the
system's resilience, we must re-envision how such a test could be conducted so
that the consequences of carrying it out are minimized. Knowing the limits of
the system's resilience would certainly help institutions see the need for
redundancy more clearly and develop procedures for disaster recovery. The
development of these procedures is necessary, and arguably critical, for
minimizing the effects of disasters, but it is also crucial for the
operational management of the global network should the Internet ever fail in
its entirety.
Since it is hard to imagine a natural disaster that would completely destroy
the infrastructure of the Internet while still being survivable for the human
race, we come to the conclusion that the only likely scenario of this kind is
the action of malicious individuals or groups united around that objective.
Of course, such an ambitious goal is hardly feasible without the enormous
technical and human resources required to carry out such a project. And if a
group with this aim exists, it is clearly either a terrorist group or, just as
likely, the cyber forces of one of the countries that maintain this kind of
security apparatus.
The fact that more and more malicious software appears on the Internet year
after year, and that destructive attacks are becoming more frequent, supports
the thesis that in the foreseeable future a hypothetical scenario is possible
in which a militant group decides to attempt to bring down the Internet in its
entirety. Of course, the questions remain what would motivate such an attempt
and by what means and techniques it would be carried out, but it is obvious
that the probability of such an event is growing, so it is not illogical to
assume that some kind of global testing of the Internet, carried out at the
level of its physical and logical infrastructure, will take place in the
future.
For now, there are a number of ways to verify the Internet as a global system
using simulation models, but their characteristics and verification methods
are largely kept quiet, mostly for security reasons, since confirmed
successful scenarios are nothing less than possible attack plans. Besides
simplifying the infrastructure whose real behavior they are meant to capture,
these models require considerable processing power, and the results obtained
from them are not the most reliable.
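A toy version of such a simulation model illustrates both the idea and the caveat above: nodes are removed from a graph that stands in for the real topology, and the largest connected component is tracked as an approximation of how much of the network remains reachable. The random graph and its parameters are assumptions; the result is only as faithful as the topology behind it.

    # Toy resilience simulation: a random graph stands in for the
    # Internet topology (a drastic simplification - exactly the caveat
    # in the text). Nodes are removed one by one and the largest
    # connected component is tracked.

    import random
    import networkx as nx

    NODES, LINK_PROB = 1000, 0.01  # assumed toy topology parameters
    g = nx.gnp_random_graph(NODES, LINK_PROB, seed=42)

    order = list(g.nodes())
    random.seed(42)
    random.shuffle(order)  # random failures; a real attacker would
                           # instead target the best-connected nodes

    for removed in range(0, 600, 100):
        h = g.copy()
        h.remove_nodes_from(order[:removed])
        largest = max(len(c) for c in nx.connected_components(h))
        print(f"{removed:4d} nodes down -> largest component: {largest}")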
Another way to check the resilience of the global network is to test a
reduced-size replica, but such tests also have a lower degree of complexity
and ignore the synergetic effects that grow with scale.
In general, it can be said that inquiries into real resilience are an
indispensable type of check for the Internet as a global infrastructure, but
also that, due to its size and complexity, they are not possible in reality;
instead, we resort to testing models that give only a vague and approximate
picture of the global network's behavior under conditions close to collapse.
Although the data collected by this kind of testing would be extremely
significant, we can only hope that we will never need it in reality, and that
the models generated by its consolidation and evaluation will remain nothing
more than necessary security protocols, never used to recover the global
network from a disaster of such proportions.