South African Computer Journal - Volume 2004, Issue 33, 2004
Source: South African Computer Journal 2004, pp 1–9 (2004)
This paper presents a software reliability growth model (SRGM) for the classification of software faults during the testing phase, based on a non-homogeneous Poisson process (NHPP). The model assumes that the testing phase consists of three processes, namely failure observation, fault isolation and fault removal. The software faults are classified into three types, namely simple, hard and complex, according to the amount of testing-effort needed to remove them. The removal complexity is proportional to the amount of testing-effort required to remove the fault. The testing-effort expenditures are represented by the number of stages required to remove the fault after the failure observation or fault isolation (with a delay between the stages). The time delay between the failure observation and the subsequent fault removal is assumed to represent the severity of the fault: the more severe the fault, the longer the time delay. A fault is classified as simple if the time delay between failure observation, fault isolation and fault removal is negligible. If there is a time delay, it is classified as a hard fault. If the removal of a fault after its isolation involves an even greater time delay, it is classified as a complex fault. The model therefore incorporates a logistic learning-process function during the removal phase of hard and complex faults. Accordingly, the total fault removal phenomenon is the superposition of the three processes. The model has been validated, evaluated and compared with well-established NHPP models by applying them to actual software reliability data sets cited from real software development projects. The results are fairly encouraging in terms of goodness of fit, predictive validity and software reliability evaluation measures.
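For orientation, the block below sketches the standard NHPP building blocks from which fault-classification SRGMs of this kind are typically assembled; the exact mean value functions and parameterisation used in the paper may differ. Here a denotes the total fault content, p_1, p_2, p_3 the proportions of simple, hard and complex faults, b_i the corresponding removal rates, and beta the learning parameter.

```latex
% Illustrative NHPP mean value functions for the three fault classes;
% the paper's exact formulation (including how the logistic learning
% function enters the hard and complex fault processes) may differ.
\begin{align*}
  m_1(t) &= a\,p_1\bigl(1 - e^{-b_1 t}\bigr)
           && \text{simple faults: one-stage (exponential) removal}\\
  m_2(t) &= a\,p_2\bigl(1 - (1 + b_2 t)\,e^{-b_2 t}\bigr)
           && \text{hard faults: two-stage (delayed S-shaped) removal}\\
  m_3(t) &= a\,p_3\Bigl(1 - \bigl(1 + b_3 t + \tfrac{b_3^2 t^2}{2}\bigr)e^{-b_3 t}\Bigr)
           && \text{complex faults: three-stage removal}\\
  m(t)   &= m_1(t) + m_2(t) + m_3(t)
           && \text{superposition of the three removal processes}\\
  b(t)   &= \frac{b}{1 + \beta\,e^{-b t}}
           && \text{logistic learning-process function}
\end{align*}
```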
Source: South African Computer Journal 2004, pp 10–23 (2004)
Axial line placement is a computational geometry problem with direct applications to Space Syntax Analysis, a technique used in the analysis of building and city layouts. While it has been shown that the general axial line placement problem is NP-complete, polynomial time solutions have been found for several restricted versions of the problem. Of these, the case of urban grids is the most applicable to space syntax. Urban grids are polygons that can be used to represent some real-world layouts, but are relatively restricted in their modelling power. The concept of a deformed urban grid was therefore introduced in an attempt to find a more flexible structure, while retaining the grid-like nature of urban grids. It was originally conjectured that this restricted nature of deformed urban grids would allow for an exact polynomial time solution of the problem. However, this article presents a proof showing that the axial line placement problem for deformed urban grids is NP-complete. As this result holds for grids with relatively little deformation, it seems likely that urban grids are the most general input instance for which exact polynomial time solutions can be found. The development of good heuristic solutions to more general instances of the problem will therefore be crucial in the automation of space syntax.
Source: South African Computer Journal 2004, pp 24–37 (2004)
We discuss the design and development of GRIFFIN, a tool for the automated processing of generalized random context picture grammars (grcpgs) and iterated function systems (IFSs) that have been converted to grcpgs. GRIFFIN was initially developed to empirically verify recent research results that show a relationship between grcpgs and IFSs. It has subsequently evolved to facilitate further research into grcpgs and to assist university students in studying grcpg theory. Its design was challenging, due to a diverse set of requirements: the inherent complexities of grcpgs imposed the need for fast, automated application of grammar rules, while flexibility requirements demanded that the user be able to control the direction in which processing develops. GRIFFIN also needed to support a wide range of IFS functions, which are not normally found in grammar processing tools. Furthermore, it needed to be designed in a way that made it easy to maintain and extend. This paper presents the approach adopted to overcome these difficulties. We also confirm the success of this approach by presenting examples of the tool's use in grcpg research.
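For readers unfamiliar with iterated function systems, the short sketch below illustrates the general idea using the standard chaos-game rendering of a Sierpinski-triangle IFS. It is generic background only; the maps, function names and parameters are illustrative and do not reflect GRIFFIN's grcpg-based representation or internals.

```python
# A minimal, generic IFS illustration: the "chaos game" applied to the three
# affine maps of the Sierpinski triangle. Background only; unrelated to
# GRIFFIN's actual grammar-based processing.
import random

# Each map is (a, b, c, d, e, f) for (x, y) -> (a*x + b*y + e, c*x + d*y + f).
SIERPINSKI_MAPS = [
    (0.5, 0.0, 0.0, 0.5, 0.00, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.50, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]

def chaos_game(maps, iterations=10000):
    """Return points approximating the attractor of the given IFS."""
    x, y = 0.0, 0.0
    points = []
    for i in range(iterations):
        a, b, c, d, e, f = random.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:                      # discard the first few transient points
            points.append((x, y))
    return points

if __name__ == "__main__":
    pts = chaos_game(SIERPINSKI_MAPS)
    print(f"generated {len(pts)} attractor points")
```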
Semantics, implementation and performance of dynamic access lists for TCP/IP packet filtering : research article
Author: S. Hazelhurst
Source: South African Computer Journal 2004, pp 38–51 (2004)
The use of IP filtering to improve system security is well established and, although limited in what it can achieve, has proved to be efficient and effective. In the design of a security policy there is always a trade-off between usability and security. Static access lists make this trade-off particularly stark. Dynamic access lists would allow the rules to change for short periods of time and would allow local changes by non-experts. The network administrator can set basic security guidelines which allow only certain basic services. All other services are restricted, but users are able to request temporary exceptions in order to gain additional access to the network. These exceptions are granted depending on the privileges of the user. The paper presents and justifies a semantics for dynamic access lists. An efficient method of implementing the dynamic semantics is proposed and experimentally validated. The experiments show that a useful dynamic semantics can be implemented with small memory costs and modest time costs.
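As an illustration of the kind of behaviour such a semantics has to capture, the sketch below shows one plausible, simplified dynamic access list in which user-requested exceptions carry an expiry time and are consulted before the static rules. All names, rule fields and the first-match resolution strategy are assumptions made for illustration and are not taken from the paper.

```python
# A minimal sketch of one plausible dynamic access-list semantics: temporary
# exceptions with an expiry time override the static policy. Illustrative
# only; the paper's actual semantics and implementation differ in detail.
import time
from dataclasses import dataclass

@dataclass
class Rule:
    src: str           # source address or "*" for any
    dst_port: int      # destination port, 0 means "any"
    allow: bool        # permit or deny

@dataclass
class TemporaryException(Rule):
    expires_at: float  # absolute time after which the exception lapses

class DynamicAccessList:
    def __init__(self, static_rules, default_allow=False):
        self.static_rules = static_rules
        self.exceptions = []
        self.default_allow = default_allow

    def grant_exception(self, rule: Rule, ttl_seconds: float):
        """Add a temporary exception that overrides the static policy."""
        self.exceptions.append(
            TemporaryException(rule.src, rule.dst_port, rule.allow,
                               time.time() + ttl_seconds))

    def decide(self, src: str, dst_port: int) -> bool:
        now = time.time()
        self.exceptions = [e for e in self.exceptions if e.expires_at > now]
        for rule in self.exceptions + self.static_rules:   # first match wins
            if rule.src in ("*", src) and rule.dst_port in (0, dst_port):
                return rule.allow
        return self.default_allow

acl = DynamicAccessList([Rule("*", 80, True), Rule("*", 0, False)])
acl.grant_exception(Rule("10.0.0.5", 22, True), ttl_seconds=600)
print(acl.decide("10.0.0.5", 22), acl.decide("10.0.0.6", 22))  # True False
```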
Source: South African Computer Journal 2004, pp 52–66 (2004)
The last decade saw a renewed interest in the field of robotics research and a shift in research focus. In the eighties and early nineties, the focus of robotics research was on finding optimal robot architectures, often resulting in non-cognitive, insect-like entities. In recent years, the processing power available to embedded autonomous agents (robots) has improved, and this development has allowed for more complex robot architectures. The focus has shifted from single robots to multi-robot teams. The key to the full utilisation of multi-robot teams lies in coordination. Although a robot is a special case of an agent, many existing multi-agent coordination techniques could not be directly ported to multi-robot teams. In this paper, we review mainstream multi-robot coordination techniques and propose a new approach to coordination based on models of organisational sociology, namely social networks. The social network based approach relies on trust and kinship relationships, modified for use in heterogeneous multi-robot teams. The proposed task allocation mechanism is then tested using two approaches: a multi-robot team task allocation simulation and a more realistic coordination problem in simulated robot environments. For the purpose of these two tests, two robotic simulators were developed. The social network based task allocation algorithm performed according to expectations and the results obtained are very promising. Although it is applied here to simulated multi-robot teams, the proposed coordination model is not robot-specific and can be applied to any multi-agent system without major modifications.
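To give a flavour of the kind of decision such a mechanism makes, the sketch below allocates a task to the capable team member with the highest combined trust and kinship score as seen by the requesting robot. The scoring function, weights and data structures are hypothetical simplifications for illustration, not the paper's actual model.

```python
# A minimal, illustrative social-network-style allocation: pick the capable
# robot that the requester trusts most, weighting trust above kinship.
# The weights and score are hypothetical stand-ins for the paper's model.
def allocate_task(task, requester, team, trust, kinship, w_trust=0.7, w_kin=0.3):
    """Return the capable robot with the highest combined social score."""
    candidates = [r for r in team
                  if task in r["capabilities"] and r is not requester]
    if not candidates:
        return None
    def score(robot):
        t = trust.get((requester["name"], robot["name"]), 0.0)
        k = kinship.get((requester["name"], robot["name"]), 0.0)
        return w_trust * t + w_kin * k
    return max(candidates, key=score)

team = [
    {"name": "r1", "capabilities": {"push", "scout"}},
    {"name": "r2", "capabilities": {"scout"}},
    {"name": "r3", "capabilities": {"push"}},
]
trust = {("r1", "r2"): 0.9, ("r1", "r3"): 0.4}
kinship = {("r1", "r2"): 0.2, ("r1", "r3"): 0.8}
print(allocate_task("scout", team[0], team, trust, kinship)["name"])  # r2
```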
Source: South African Computer Journal 2004, pp 67–76 (2004)
Input via the keyboard can be slow and tedious for many computer users, but the problem is particularly severe for those with motor disabilities. Strategies that reduce the number of keystrokes required can help limit the problems these users face. In a programming environment, word prediction is a highly successful strategy for saving keystrokes. Measures such as the recency and repetitiveness of words can be used to guide the prediction process to a 40% keystroke saving. However, these measures ignore information about the program structure. The goal of this study was to test whether making use of knowledge of the syntax of a programming language can effectively assist these statistical prediction strategies. The study was conducted by inputting Pascal program code into two simulated predictive program editors. One simulator used only statistical prediction, whereas the other also included the syntactic approach. The average savings were compared by performing a paired-sample means t-test. The results show that including syntactic information about the Pascal programming language accounts for a further 3% increase in the number of keystrokes saved and a further 9% increase in the accuracy of predictions.
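The sketch below illustrates, in simplified form, the two strategies being compared: a purely statistical ranking of candidate words by frequency and recency, and the same ranking filtered by a crude syntactic expectation (here, whether a Pascal statement keyword is expected next). The scoring and the syntax check are stand-ins for illustration, not the simulators' actual rules.

```python
# A simplified illustration of statistical vs. syntax-assisted word
# prediction for Pascal code. The keyword set, scoring and syntax check
# are illustrative approximations, not the study's actual mechanisms.
from collections import Counter

STATEMENT_KEYWORDS = {"begin", "end", "if", "for", "while", "repeat", "case", "writeln"}

def statistical_candidates(history, prefix, k=5):
    """Rank previously seen words matching the prefix by frequency, then recency."""
    freq = Counter(history)
    recency = {w: i for i, w in enumerate(history)}        # later index = more recent
    matches = [w for w in freq if w.startswith(prefix)]
    matches.sort(key=lambda w: (freq[w], recency.get(w, -1)), reverse=True)
    return matches[:k]

def syntactic_candidates(history, prefix, expect_statement, k=5):
    """Apply a crude syntactic filter on top of the statistical ranking."""
    ranked = statistical_candidates(history, prefix, k=len(set(history)))
    if expect_statement:        # a statement keyword is expected here
        ranked = [w for w in ranked if w in STATEMENT_KEYWORDS]
    else:                       # an identifier is expected, e.g. after ':='
        ranked = [w for w in ranked if w not in STATEMENT_KEYWORDS]
    return ranked[:k]

history = ("program demo ; var width : integer ; begin "
           "width := 10 ; writeln ( width ) end .").split()
print(statistical_candidates(history, "w"))                       # ['width', 'writeln']
print(syntactic_candidates(history, "w", expect_statement=True))  # ['writeln']
```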
Using organisational safeguards to make justifiable privacy decisions when processing personal data : research article
Author: M.S. Olivier
Source: South African Computer Journal 2004, pp 77–87 (2004)
Privacy-enhancing technologies can be used to enhance the privacy of individuals who interact with information processing systems. This paper considers such technologies as they can be used by an organisation to safeguard the personal information it processes. The paper focuses on how access control could be used to protect the individual against misuse of personal data inside the organisation. More specifically, the paper considers how such a privacy-enhancing technology can make a just choice when deciding whether or not an access request to personal data should be allowed.
Access control decisions in this paper are based on the regulations that govern the interaction, the organisational policies that apply and the individual's privacy preferences.
The proposed model forms part of the organisational safeguards layer (OSL) of the Layered Privacy Architecture (LaPA) proposed earlier.
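A minimal sketch of such a decision is given below, under the assumption that a request is granted only when the applicable regulations, the organisational policy and the data subject's own preferences all permit the stated purpose. The lookup tables, field names and example data are illustrative; the actual OSL/LaPA model is richer than this.

```python
# A minimal, illustrative access decision combining three safeguard sources:
# regulations, organisational policy and the individual's preferences.
# Granted only if all three permit the purpose. Field names and the lookup
# structure are assumptions, not the paper's actual model.
def decide_access(request, regulations, policy, preferences):
    """Return True only if every safeguard layer permits the request."""
    purpose = request["purpose"]
    data_item = request["data_item"]
    allowed_by_regulation = purpose in regulations.get(data_item, set())
    allowed_by_policy = purpose in policy.get(request["role"], set())
    allowed_by_subject = purpose in preferences.get(data_item, set())
    return allowed_by_regulation and allowed_by_policy and allowed_by_subject

regulations = {"medical_record": {"treatment", "billing"}}
policy = {"nurse": {"treatment"}, "accounts": {"billing"}}
preferences = {"medical_record": {"treatment"}}

request = {"role": "nurse", "data_item": "medical_record", "purpose": "treatment"}
print(decide_access(request, regulations, policy, preferences))  # True
```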
Learning from information systems failures by using narrative and ante-narrative methods : research article
Source: South African Computer Journal 2004, pp 88–97 (2004)
We see, know and experience information systems development failures in many domains and in many countries. This paper will explore some of the issues related to the study of these failures. Every year, billions of dollars are wasted on failed projects. The paper will emphasise the fact that the study of failures can only take place post hoc, once a failure has been identified. Preparation is therefore different from normal scientific study, where a situation is pre-selected in advance, the precise parameters are identified and decisions are made about the best methods for measuring them accurately and objectively. The literature reveals that researchers and practitioners have been experiencing project failures for many years. Indeed, acknowledgements of failures go back at least thirty-six years. However, failures are still a prevalent problem. A significant obstacle to the study of failures is the lack of acknowledged research methods for understanding such complex phenomena. The evidence collected during failure investigations emerges from a variety of sources, perspectives and contexts. Not surprisingly, it often appears to be ambiguous, incoherent and confused. The information collected tends to be rich, messy, contradictory and subjective. Such situations call for a new repertoire of methods to address the unique features of failures. This paper will introduce possible alternative ways of looking at and constructing failure stories. The techniques described come under the umbrella term forensic analysis. The insights obtained from forensic analysis can be used for internal learning within organisations as well as externally within the discipline, thereby enabling practitioners worldwide to benefit from the mistakes of others.
Theory-based information systems research : the role of phenomenological hermeneutics : research article
Author: L. Whittaker
Source: South African Computer Journal 2004, pp 98–110 (2004)
Interpretive methods of research are well established in the field of information systems. In general, however, such research is empirical in nature, relying on the principles of hermeneutics to inform the gathering and interpretation of primary data. Hermeneutics as method, however, has its origins in textual interpretation, and it is thus equally applicable to theory-based research approaches.
The purpose of this paper is to propose that a phenomenological hermeneutic approach can be appropriate for information systems research. In order to demonstrate this, the paper discusses the nature of phenomenological hermeneutics, the use of hermeneutics as method and the application of hermeneutics to a particular area of IS research, namely IS evaluation. The intention is to demonstrate that useful insights into a real world problem can be gained through the interpretation of appropriate theory and secondary data.
A theory-based approach must, however, not only be shown to be possible, but must, if it is to be useful within the discipline, be accompanied by criteria for evaluating the approach itself. The paper therefore proceeds to propose some criteria for the assessment of theory-based phenomenological hermeneutic studies, based on existing criteria for the evaluation of hermeneutic studies.
Source: South African Computer Journal 2004, pp 111–112 (2004)
The growth in the use of the Internet and the World Wide Web has led to new types of security risks. From the moment a business obtains a web presence, there is the potential for the business systems in the organisation to be exposed to security and confidentiality breaches across the entire Internet. Any link to the Internet makes a business vulnerable and creates a potential intrusion risk.