Abstract

The concept of “adult learning” is largely credited to Malcolm Shepherd Knowles, the pioneer of research in this area of study. Many theories have since been proposed and advanced with regard to adult learning. Major research in this field has been done by Albert Bandura, Merriam and Caffarella (1999), Collins (1991), Gruber (1973), Gordon Ross (2002), and Brookfield (1980s), among others. The research findings are characterized by conflicting views and the emergence of new ideas focusing on adult learning, commonly referred to as andragogy. In most cases, the processes involved in studying andragogy are extensively compared and contrasted with those observed in child education, otherwise known as pedagogy. This paper focuses on the theoretical perspectives of adult learning proposed by various scholars, including Albert Bandura. Key words: andragogy (adult learning); pedagogy (child learning).

Introduction

Research into the learning process has been among the most important scholarly work conducted across different fields of study. One of the most commonly cited examples is the research conducted by Albert Bandura in the 1970s. Bandura investigated the effects of learning theory on individuals exposed to different sources of knowledge (Bandura, 1973). In his “social learning theory”, Bandura posited that “individuals can learn or borrow knowledge and ideas from each other by mere observations, imitations and modeling”.

For instance, “characters who serve as models for aggressive behavior may be attended to by viewers and depending upon whether the behaviors are rewarded or punished, would either inhibit or encourage imitation of the behaviors” (Bandura, 1973).

According to Bandura’s theory of social learning, children are more vulnerable to new and influential ideas, which may contribute significantly to cultivating their behaviors as they grow into adulthood (Bandura, 1973).

Theories of adult learning are characterized by common basic concepts acting as variables and providing the basis on which arguments have been built. Experience and behavioral change are perhaps the most utilized variables in this research. Merriam and Caffarella (1999) observed that, beginning in the 1950s, “the very basic definitions of learning were established around ideas of change and behavior”. According to these researchers, this initial conception triggered the emergence of new ideologies and theoretical frameworks. Whether performance was based on learning, or whether learning had no impact on shaping human behaviors, remained a matter of debate (Bandura, 1973).

Against the complexity already arising in understanding the learning process, Jean Piaget proposed his stages of cognitive development. Piaget contended there were “four invariant phases of cognitive development in relation to age”. Formal operations was his final stage of cognitive development, reached between the ages of twelve and fifteen. The argument at this stage was that “normal children reached the final development stage of development between the age of twelve and fifteen” (Bandura, 1973). This phase was later renamed “the problem solving stage” by Brookfield (1986). Arlin posited that formal thought was not a single stage as Piaget believed, but was composed of two distinct stages (Bandura, 1973). Arlin's hypothesis, however, generated more debate, raising more questions than answers, “opening doors to understanding of the adult learning” and drawing the attention of many intellectual thinkers. This paper focuses on the various views raised by different thinkers on this issue, and on the extent to which these views may have redefined the general understanding of andragogy.

Literature Review

Gordon Ross did an extensive review of adult learning and came up with a set of what he referred to as “the important facets and perspectives of adult learning theory” (Brookfield, 1984). These included behaviorism, constructivism, humanism, critical perspectives and “personal responsibility orientation”. Authors like Albert Bandura (1973) had a rather different view of the entire process. In his school of thought, “adult learning was an interactive relationship of both theory and practice” (Bandura, 1973). His research was built upon “quantitative studies, quantitative measures and learning projects”. The general conception of this research was that “adult learners studied a particular theory and then put the theoretical knowledge into practice” (Bandura, 1973).

The founding father of the concept of adult learning is commonly believed to be Malcolm Knowles. Knowles's work paved the way for study and research on adult learning, which was to become an exceptional field of study for many scholars from different fields. Knowles began by giving his precise definition of adult learning. According to him, “adult learning was the art and science of helping adults to learn” (Knowles, 1968). He went on to compare and contrast adult learning and child learning. Child learning, as he defined it, “was the art and science of helping children to learn” (Knowles, 1968). Knowles's studies “were based on the assumptions that there existed substantial identifiable differences between adult learners and learners below the age of eighteen years” (Merriam and Caffarella, 1999). To Knowles, adult learning was a more self-directed process facilitated by experience, internal motivation and a strong eagerness to apply what has been learnt. Adult learners were mostly attracted to development-oriented tasks (Merriam and Caffarella, 1999).

The works of Malcolm Knowles became popular between the 1960s and 1980s, especially after he coined and popularized the term andragogy to mean “the art and science of helping adult learners” and pedagogy to mean “the art and science of helping children to learn” (Merriam and Caffarella, 1999). He emphasized that andragogy was a newly emerging technology which would “facilitate the development and implementation of learning activities in adulthood” (Merriam and Caffarella, 1999). Andragogy, as it has been understood, was a technology built on five key presumptions: experience, self-concept, orientation, readiness and motivation (Merriam and Caffarella, 1999).

Knowles held that as individuals became more mature, they gradually moved from dependence to self-directedness. The experience that came with adulthood played an important role in adult learning, and “the learning readiness of adult individuals was closely related to the conception of new social roles”. Adult learners had a greater level of orientation and desire to apply what they had learnt, and their motivation for learning increased with maturity and was influenced by internal factors (Knowles, 1968; Bandura, 1973).

The prevailing theories of adult learning have been based on the five assumptions proposed by Knowles. Collins (1991) delved into the aspect of experience, which he pointed out as the most conspicuous factor in “a person's ability to create, retain and transfer knowledge” (Collins, 1991).

The theories of adult learning have also seen very little consensus, with continued debate and many theories being developed to help understand the learning process taking place in adulthood. The broad range of available theories has made the study even more complex, with theorists attempting to actualize their theoretical perspectives. Because of the many theories, some researchers have grouped them. For instance, “the stimulus-response and cognitive theories” advanced by Bower and Hilgard (1966) and “the organismic and mechanistic theories” proposed by Merriam and Caffarella (1999) are different groups of theories of adult learning.

Knowles's views on learning have, however, not escaped criticism. Merriam and Caffarella (1999), in their literature reviews, questioned whether the idea in Knowles's work was to develop a theory of teaching or a theory of adult learning. These researchers emphasized that it is not apparent whether what Knowles presented in his work was “a theory on adult learning or a theory of teaching” (Merriam & Caffarella, 1999; Bandura, 1973). This position was backed by Brookfield (1986), who contended that Knowles failed to develop or prove his theory much beyond his preliminary work, coupled with well-grounded principles of good teaching and learning practice.

Other researchers, of the caliber of Starbuck and Hedberg (2003), suggested that the art and science of learning was a process influenced by both situational and environmental factors. This group of researchers held that environmental and situational circumstances could either promote learning or make it impossible for certain individuals. They added that some circumstances were established by “the structure of organizations, time constraints and either negative or positive environmental conditions” (Brookfield, 1986).

Arguing on the basis of “Multiple Intelligences”, Howard Gardner represents a group of “theorists who discarded the idea of one type of intelligence measured by modern psychometric instruments” (Brookfield, 1986). Gardner believed that not one but seven types of intelligence exist: “linguistic intelligence, logical arithmetic intelligence, spatial intelligence, musical intelligence, bodily kinesthetic intelligence, interpersonal intelligence and intrapersonal intelligence” (Bandura, 1973; Brookfield, 1986). He argues further that both linguistic and logical arithmetic intelligence are measured by the Intelligence Quotient test, abbreviated as the IQ test. Naturalist intelligence became Gardner's eighth intelligence factor, which he described as “the ability to recognize and classify the living species, flora and fauna” (Brookfield, 1986).

With regard to internalization of new information, Collins (1991) discussed three basic categories of thinkers. First are the dualistic thinkers, who believe in absolute truth. They spend their time knowing only one truth for every aspect, and have difficulty internalizing the truths found in the “shades of grey”, that is, the truths that do not come in black and white as clear-cut. The second category is the multiplistic thinkers, who are programmed to learn by analyzing multiple truths to find one right answer. They differ from the dualistic thinkers in believing that there could be several solutions, though only one is applicable. The final category is the relativistic thinkers, who believe the truth or solution to a problem is relative and situational. They are more capable of dealing with situations that are presented in neither black nor white (Bandura, 1973).

Research Findings

To exhaustively understand the patterns of self-directed learning, it is crucial to discuss the differences in how people obtain and internalize new information. This is important because, as people develop and become familiar with a pattern of acquiring and internalizing new information, it forms the basis of knowing the unique needs of adult learners and how to meet those needs (Merriam & Caffarella, 1999; Bandura, 1973).

Considering the above, adult learners can perform exceptionally well when the right circumstances are put in place. This involves placing the learners in an environment where they are respected for who they are. According to the research findings of Knowles (1968), learners are also motivated when placed in learning groups with their peers. Most learners are motivated when they share past learning experiences or work-related and career experiences, and when they are age mates.

Adult learners are mature people still under the instruction of a tutor to expand their knowledge. They are considered non-traditional students since they differ from school-going students with respect to age, experience and dependence. Most of them are well advanced in age, and instead of depending on someone, they actually have people who depend on them: spouses, children and other dependants or relatives. School-going children are considered traditional students since they fall in the right category of age and experience level to be under the care of an instructor (Merriam & Caffarella, 1999; Knowles, 1968).

The traditional students fall in the group of learning called K-12. This is shorthand for ‘kindergarten to grade 12’ learning, which typically continues to at least sixteen years of age. This is usually the basic learning requirement for an individual to acquire skills such as reading, writing and, at the advanced levels of learning, critical thinking. Anybody undergoing the K-12 system who is above 18 years of age can be considered an adult learner, or in this case, a non-traditional student (Knowles, 1968; Collins, 1991).

Adults are different from children in many ways. The basic characteristics of an adult can promote or hinder their learning capability. Most adults are independent and may have people who depend on them. This factor can be considered in different ways depending on the adult in question. Due to the state of independence, some adults are capable of learning without being given a lot of attention by the educators. Others may require special attention despite their independence in order to learn and understand (Collins, 1991). 

It is also a common belief that maturity increases with advancement in age. However, this assumption is still disputed, because some people show more maturity at tender ages than their seniors (Bandura, 1973; Merriam & Caffarella, 1999; Knowles, 1968).

Motivators are important factors in improving the learning experience. Motivators for learning can be external, such as rewarding good performance and punishing poor performance. They can also be internal, as seen in the personal drive to acquire knowledge and the satisfaction derived from it, whether physically rewarded or not. Whether internal or external, motivators are useful in adult education for increasing interest. Most adults, however, find internal motivation very effective, as it encourages them to acquire various skills and knowledge (Collins, 1991).

Adult learners, or non-traditional students, are all mature people. Over the course of their lives they have probably acquired skills, interests and various experiences. Learning becomes more meaningful when it relates to their field of knowledge. This impacts significantly on their interest, since most learners grasp content best when taken from the known to the unknown (Knowles, 1968; Bandura, 1973).

In order to fully participate in the learning process and enjoy the experience, adult learners may require certain conditions. All learners, traditional or non-traditional, come into the learning arena with their unique backgrounds, experiences and personalities. Non-traditional students, being mature, are more aware of their uniqueness and individuality. Taking this into account will greatly motivate them to learn, by making them feel appreciated and noticed as individuals (Collins, 1991; Gardner, 1993).

Apart from their uniqueness, these adults have differing statuses and abilities. Recognition of their status, and effort to utilize some of the abilities they possess, will also motivate learning by giving them a sense of meaning and direction (Collins, 1991).

Discussion

To encourage adult learning, the learning process should be tailored to relate to experiences in life. This motivates the learners as they endeavor to apply the new knowledge. It is also apparent that adults are visionary learners: being able to see the end product from the moment they begin the process of learning is one of their desires. The ability to visualize the end product of the learning experience is a major factor in motivating the learning process; learners should be able to see the knowledge they acquire playing a vital role in their future.

Simply put, adult learners are more focused on their goals and ambitions, and they will do all it takes to achieve their dreams. To put this into a theoretical perspective, we may consider the position of Collins (1991): “adult learning is an interactive relationship of both theory and practice” (Collins, 1991).

Conclusion

The concept of “adult learning” has largely been credited to Malcolm Knowles, who is also believed to be the founder of the concept of andragogy, “the art and science of adult learning”. Many theorists, such as Bandura (1973), Merriam and Caffarella (1999), and Collins (1991), borrowed significantly from Knowles's work. The research findings have been characterized by conflicting views and the emergence of new ideas. In conclusion, it can be noted that this large body of research has helped shape the understanding of adult education.

Annotated Bibliography

Bandura, Albert. Aggression: A Social Learning Analysis. Englewood Cliffs, NJ: Prentice-Hall, 1973. 43-60.

Albert Bandura is renowned for his theory of social learning, in which he argued that “individuals can learn from one another through observation, imitation, and modeling”. Bandura was a behaviorist theorist who strongly believed in behavior change, “modeling and reciprocal determinism”. In addition to his theory of social learning, Bandura published works on “social foundations of thought and action, self-efficacy, and principles of behavior modification”, among others.

Bandura, A. Social Foundations of Thought and Action. Englewood Cliffs, NJ: Prentice-Hall, 1986.

Bandura, A. Self-Efficacy: The Exercise of Control. New York: W. H. Freeman, 1997.

Bandura, A. Principles of Behavior Modification. New York: Holt, Rinehart & Winston, 1969.

Brookfield, S. Understanding and Facilitating Adult Learning. San Francisco: Jossey-Bass, 1986.

In this work, Stephen Brookfield investigates the factors that facilitate learning in the adult stages of life.

Collins, M. “Self-directed learning and the emancipatory practice of adult education: Re-thinking the role of the adult educator”. Proceedings of the 29th Annual Adult Education Research Conference. Calgary University, 1991.

Collins, M. is a senior researcher at the University of Calgary. In this report, Collins compiled research findings from the 29th annual conference. The emphasis in the report was that adult learning is a process characterized by both theory and practice.

Knowles, M. The Modern Practice of Adult Education: Andragogy versus Pedagogy. Englewood Cliffs: Prentice Hall/Cambridge, 1984.

Malcolm Knowles was an educational researcher at North Carolina State University. He is credited as the founding father of andragogy, “the art and science of adult education”. In this work, Knowles discusses elaborately the distinctive factors influencing adult learning. Knowles worked for over 30 years in the fields of adult education and clinical psychology, and studied the learning process as an art and science.

Merriam, S. B. & Caffarella, R. S. Learning in Adulthood: A Comprehensive Guide. San Francisco, CA: Jossey-Bass, 1999.

Merriam and Caffarella are educational researchers whose works have greatly contributed to the study of adult learning. In addition to Learning in Adulthood, published in 1999, they have also conducted research on “andragogy and self-directed learning”, investigating “the pillars of adult learning theory”, in 2001.

Abstract

Software testing technology has incontestably undergone tremendous development, with different software engineers playing significant roles. The technology reflects decades of continuous development; a widely cited classification was pioneered in the 1980s by the computer scientists Gelperin and Hetzel (Hetzel, 1990, “The Growth of Software Testing”), who categorized software testing development into phases and goals. The period up to 1956 was dominated by software debugging, which was not clearly distinguished from software testing. Software debugging and testing went through extensive demonstration between 1957 and 1978, leading to the separation of software testing from debugging. This was followed by the destruction-oriented period, between 1979 and 1982, in which testing aimed to find errors. From 1983 to 1987, there were series of software evaluations aimed at upholding quality in software technology. Software testing picked up in 1988, when the focus of software developers shifted to preventing possible failures and user discontent (Hetzel, 1990). This paper delves into the processes involved in software testing and the development of software testing from manual to automated. To help understand the technology behind software testing and its development, an elaborate literature review of previous research works is also given salient attention.

Introduction

Software testing is the process of evaluating the condition of a software product and analyzing the resulting findings in an attempt to make necessary improvements or changes (IEEE, 1990, “Standard Computer Dictionary”). The test “is done under controlled conditions which should include both the abnormal and the normal conditions”. According to Black (2008), “it is the process of executing a program or system with the intent of finding errors”. Testing can also be defined “as the process of validating and verifying that a software program meets specific prerequisites” and works to the expectation of users (Yang & Chao, 1995).

The testing process can be done manually or automated, depending on which method is preferred. Most testing takes place only after specific test objectives have been identified and the coding process is complete (Dustin et al., 1999). This implies that the method employed in carrying out the test is also governed by the methodology used in the development of the software to be tested.

Software testing is often very expensive, and automation is a better strategy to reduce time and cost. Software testing tools and techniques usually suffer from a lack of generic applicability and scalability (Yang & Chao, 1995). The reason is explicit: for automation to be possible, we must have some means to generate oracles from the specification, and to generate test cases to test the target software against those oracles to decide its correctness. To date, no fully scaled system has achieved this objective, since significant amounts of human intervention are still needed in testing. The level of automation remains at the automated test script level (Dustin et al., 1999).

Tests are declared clean or positive if they aim at validating the product. The limitation is that validating that the software works for a particular case says nothing beyond that case: a finite number of tests cannot validate that the software works for all situations. By contrast, only one failed test is enough to prove that the software does not work. The process of trying to show that software does not work is referred to as dirty or negative testing. For a piece of software to survive a reasonable level of dirty testing, it must have adequate exception handling capabilities (Yang & Chao, 1995).
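The clean/dirty distinction can be illustrated with a small sketch; the `safe_divide` function and its test cases are hypothetical, invented here purely for illustration:

```python
def safe_divide(a, b):
    """Divide a by b, raising a descriptive error on invalid input."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Clean (positive) test: validate expected behavior on valid input.
assert safe_divide(10, 2) == 5.0

# Dirty (negative) test: confirm the software handles invalid input
# through its exception path rather than crashing or failing silently.
try:
    safe_divide(1, 0)
    raise AssertionError("expected ValueError for division by zero")
except ValueError:
    pass  # expected: the exception handling survived the dirty test
```

The positive test only shows correctness for one input; the negative test probes exactly the exception handling that dirty testing targets.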

Software reliability has an important connection with many aspects of software, including the architecture and the amount of testing it has undergone. Based on an operational profile, testing can function as a statistical sampling methodology to gain failure data for reliability estimation (Dustin et al., 1999).

Software testing is not mature. It remains an art, since it has not been reduced to a science. Testing techniques have advanced slowly, leading to the continued use of methods invented over 25 years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing is, however, far less expensive than not testing software, especially in places where human lives are in danger. Solving the software testing problem is no easier than solving the Turing halting problem: one cannot be sure of the correctness of a piece of software, since no verification system can identify every correct program, nor can we be certain that a verification system is itself correct (John, Philip, & David, 1988).

There are a number of software testing methods and techniques, categorized according to purpose and life-cycle phase. By purpose, there is correctness, performance, reliability and security testing. By life-cycle phase, testing can be classified into the requirements, design, programming, evaluation, installation, acceptance and maintenance phases.

Correctness is the minimum requirement of software and the vital intention of testing. It needs some type of oracle to tell accurate behavior from erroneous behavior. The tester may or may not know the inside details of the software module under test, such as its control flow and data flow. Accordingly, either a white-box or a black-box point of view can be taken in testing software for correctness.

Black-box testing mainly refers to functional testing, a testing method that emphasizes executing the functions and examining their input and output data. The tester treats the software under test as a black box: only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs produced by corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness (John, Philip, & David, 1988). All test cases are derived from the specification; no implementation details of the code are considered.
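A minimal black-box test can be sketched as follows: the unit under test is exercised purely through inputs and outputs, with every case derived from a stated specification. The `clamp` function below is an assumed example, not taken from the text:

```python
# Specification: clamp(x, lo, hi) returns lo if x < lo,
# hi if x > hi, and x otherwise.
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

# Black-box test cases derived from the specification alone,
# with no reference to how clamp is implemented internally.
spec_cases = [
    ((5, 0, 10), 5),    # value inside the range passes through
    ((-3, 0, 10), 0),   # value below the range is raised to lo
    ((42, 0, 10), 10),  # value above the range is lowered to hi
    ((0, 0, 10), 0),    # boundary input maps to the boundary itself
]
for args, expected in spec_cases:
    assert clamp(*args) == expected
```

Note that the cases would remain valid for any implementation satisfying the same specification, which is the defining property of functional testing.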

It is understandable that the more of the input space is covered, the more problems will be found, and hence the more confident one can be about the quality of the software. However, exhaustive testing of the combinations of valid inputs is impossible for most programs, not to mention invalid inputs, timing, sequencing, and resource variables. Combinatorial explosion is the major barrier in functional testing. We can also never be sure whether the specification itself is correct or complete.

According to Beizer (1995), because of the limitations of the languages in which specifications are written, ambiguity is often unavoidable. Even using some type of formal or restricted language, we may still fail to write down all possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. People “can seldom specify clearly what they want” (Hetzel, 1988); often they only know what they want after it has been built. Specification problems contribute to about 30 percent of all bugs in software.

A number of techniques are found in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once, crossing every branch statement, or covering all possible combinations of true and false condition predicates (Hetzel, 1988).

As opposed to black-box testing, “software is viewed as a white-box or glass-box in white-box testing, as the structure and flow of the software under test are visible to the tester”. Testing plans are made according to the details of the software implementation, such as the programming language, logic and style, and test cases are derived from the program structure.

Loop testing, control-flow testing and data-flow testing all map the corresponding flow structure of the software into a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary “dead code”, code that is of little or no use or never gets executed at all, which cannot be discovered by functional tests (Hamlet, 1994).

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use. The boundary between the black-box approach and the white-box approach is, however, not clear-cut (Hetzel, 1988). Many of the testing strategies mentioned above may not be safely classified as black-box or white-box testing; the same is true of “transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text”. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another is that “the idea of specification itself is broad”: it may contain any requirement, including the structure, programming language and programming style, as part of the specification content (Hetzel, 1988).
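The mutation idea can be sketched in miniature. Here the mutants are written by hand as deliberately faulty variants of an `absolute` function (everything in this snippet is assumed for illustration; real mutation tools generate mutants automatically from the source):

```python
# Original program under test.
def absolute(x):
    return x if x >= 0 else -x

# Hand-written "mutants": each variant contains one seeded fault.
mutants = [
    lambda x: -x if x >= 0 else x,   # fault: branches swapped
    lambda x: x if x >= 0 else x,    # fault: negation dropped
]

def kills(test_inputs, mutant):
    # A mutant is "killed" when some test input makes its output
    # differ from the original program's output.
    return any(mutant(x) != absolute(x) for x in test_inputs)

suite = [5, -3, 0]
killed = sum(kills(suite, m) for m in mutants)
# Mutation score: fraction of seeded faults the suite detects.
print(f"mutation score: {killed}/{len(mutants)}")
```

A suite that kills all mutants gives some evidence of its fault-detecting power; the cost problem noted above comes from compiling and running the suite against every generated mutant.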

We may be reluctant to consider random testing a testing technique, since test case selection is simple and straightforward: cases are randomly chosen. Yet research has shown that random testing is more cost-effective for many programs; some very subtle errors can be discovered at low cost, and its coverage is not inferior to that of other, carefully designed testing techniques. One can also obtain a reliability estimate from random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
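Random testing can be sketched as drawing inputs from a generator and checking each output against an oracle; here simple properties of sorting serve as the oracle, and `sort_under_test` is a hypothetical stand-in for the implementation being tested:

```python
import random

def sort_under_test(xs):
    # Stand-in for the implementation being tested (hypothetical).
    return sorted(xs)

random.seed(0)  # fixed seed so the run is reproducible
for _ in range(1000):
    # Randomly chosen test case: random length and random contents.
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    out = sort_under_test(xs)
    # Oracle: output must be ordered and preserve the input's elements.
    assert all(a <= b for a, b in zip(out, out[1:]))
    assert len(out) == len(xs) and sorted(xs) == out
```

The selection strategy costs almost nothing, which is exactly why random testing can be cost-effective when a cheap oracle such as an ordering property is available.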

Not all software systems carry explicit performance specifications; instead, each system has implicit performance requirements: the software should not take infinite time or resources to execute. The term “performance bugs” is sometimes used to refer to design problems in software that cause the system performance to deteriorate (Hetzel, 1988).

The probability of failure-free operation of a system is known as software reliability. It is related to many characteristics of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult, but testing is an appropriate sampling method for measuring it. Using an operational profile, “software testing can be used to detect failure data”, and an estimation model can then be used to analyze the data, estimate the current reliability and predict future reliability. Depending on the estimate, the developers can decide whether to release the software, and the users can decide whether to install and use it.
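In its simplest form, the sampling idea above reduces to estimating reliability as the observed failure-free fraction of test runs drawn from the operational profile. The numbers below are invented for illustration, and real reliability models are far more elaborate than this single ratio:

```python
# Sketch: naive point estimate of per-run reliability from test outcomes.
n_runs = 10000       # test runs sampled from an operational profile
n_failures = 3       # failures observed during those runs
reliability_estimate = 1 - n_failures / n_runs
print(f"estimated reliability per run: {reliability_estimate:.4f}")
```

A release decision would then compare such an estimate (with appropriate confidence bounds) against the required reliability target.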

The robustness of a software component is “its ability to function correctly in the presence of excessive inputs or stressful environmental conditions” (James & Bret, 2001). In robustness testing, functional correctness of the software is not of concern, whereas in correctness testing it is mandatory; robustness testing only checks for robustness problems such as machine crashes, process hangs or abnormal termination. The oracle is therefore comparatively simple, so robustness testing “can be made more portable and can be expanded more than correctness testing” (James & Bret, 2001). Stress testing, or load testing, is often used to test the whole system rather than the software alone. In such tests the software or system is exercised with loads at or beyond the specified limits. Typical stresses include resource exhaustion, bursts of activity, and sustained high loads.
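The simplicity of the robustness oracle can be sketched as follows: hostile inputs are fed to the component, and the only check is that each call returns normally, with no crash or abnormal termination. The `parse_age` component and its inputs are hypothetical:

```python
def parse_age(text):
    """Component under test (hypothetical): parse an age string defensively."""
    try:
        value = int(text)
    except (ValueError, TypeError):
        return None
    return value if 0 <= value <= 150 else None

# Robustness oracle: each call must simply return. Whether the returned
# value is functionally correct is deliberately NOT checked here.
hostile_inputs = ["", "abc", None, "9" * 10000, "-1", "\x00", " 42 "]
for raw in hostile_inputs:
    result = parse_age(raw)  # must not raise
    assert result is None or isinstance(result, int)
```

Because the oracle never consults a specification of the correct output, the same harness can be reused across many components, which is what makes robustness testing portable.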

Software quality, reliability, and security are tightly interdependent: flaws in software can enable hackers and crackers to eavesdrop or open security holes (John, Philip, & David, 1988). Many critical software applications and services carry complex security measures against malicious attacks. The objectives of security testing in these systems include identifying and removing software flaws that could lead to security breaches, and validating the robustness of the security measures; simulated security attacks can also be performed to find vulnerabilities. Testing is potentially endless: we cannot test until all the defects are revealed and removed (John, Philip, & David, 1988). At some point we have to stop testing and ship the software. The question is when.

Realistically, testing is a trade-off among budget, time, and quality, and in practice it is driven by profit models. The discouraging, and unfortunately most common, approach is to stop testing whenever any of the allocated resources is exhausted; automating the testing process helps reduce all of these expenses. The optimistic stopping rule is to stop testing when either reliability meets the requirement or the benefit of continued testing can no longer justify its cost. This usually requires reliability models to evaluate and predict the reliability of the software under test, with each evaluation requiring "recurrent running of the following cycle: failure data gathering, modeling, and prediction". The method does not fit well for "hyper-dependable systems", since the actual field failure data would take too long to accumulate (James & Bret, 2001).
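The optimistic stopping rule can be sketched as a simple decision function. The numeric inputs below are illustrative assumptions, not a calibrated profit model:

```python
def should_stop(reliability_estimate, target, cost_of_more_testing, expected_benefit):
    """Optimistic stopping rule from the text: stop when the reliability
    target is met, or when further testing costs more than it is
    expected to return."""
    if reliability_estimate >= target:
        return True, "reliability target met"
    if expected_benefit < cost_of_more_testing:
        return True, "further testing not cost-effective"
    return False, "keep testing"

# Illustrative figures only
print(should_stop(0.999, target=0.995, cost_of_more_testing=50, expected_benefit=10))
print(should_stop(0.980, target=0.995, cost_of_more_testing=50, expected_benefit=200))
```

In the first case testing stops because the target is met; in the second, the target is missed but testing remains worthwhile, so it continues.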

Literature Review

It is prudent for all programming analysts to have an idea of what software engineering is all about. A well-understood concept of software engineering makes an analyst far more effective at his or her job.

Hands-on knowledge of software engineering is essentially captured in reports from four phases in the history of computer use: the 1950s; the early and mid 1960s; the early and mid 1970s; and the early and mid 1980s up to the present (Hetzel, 1988).

The 1960s saw the computing industry experience major difficulties. The problems of this era stemmed from several human, environmental, and economic factors. The human factor arose as software programming became more and more prominent in daily human activities, with machines advancing at a speed programmers found hard to cope with (Hetzel, 1988). The environmental and economic factors resulted from the pressure placed on programming infrastructure and on the monetary allocation for the computer programming effort (James & Bret, 2001).

The turn of the century has seen these problems cling to the industry like a cancer and, like a cancer, grow with the growth of programming culture, taking current forms of the old issues. During the system development life cycle, several considerations should be taken into account by the system analyst and the major stakeholders who aim to adopt the new system (Hetzel, 1988). These considerations cover the overall cost of implementing a new system, reviews of how the system will function, and the problems likely to occur during its implementation (James & Bret, 2001).

Once the stakeholders appreciate the importance of reading the small print in the manuals and attending to every detail, the problem will largely take care of itself. It is important for them to be clear on how the system works and to make this known to the various players in the industry (Dustin et al., 1999). This will not only minimize problems during the life cycle but also create a clear understanding among all involved, erasing misconceptions and over-expectations placed on the system and its developers (Dustin et al., 1999).

It is difficult to understand how a system works without placing it in the correct context and perspective (Hetzel, 1988). The same holds for all software products, which are best understood when viewed in the right light; doing so improves understanding of what drives them and can minimize some of the hitches they cause (Hetzel, 1988).

This leads to the conclusion that a little organization in the overall system process goes a long way toward improving the general picture. The dynamic nature of system development brings into play various environmental and human factors that need to be dealt with (Philip et al., 1997). In the long run, obtaining the most nearly optimal system requires considerations in several areas (Philip et al., 1997).

With a full grasp of the issues above, one can see that software testing is the backbone of this process, because it is a crucial factor in determining the feasibility of the entire project.

Software testing, a complex activity, can mean different things to different people. Arriving at an overall view requires understanding the basic activity itself: testing (Yang & Chao, 1995).

Across the generations there have been various views of testing. A good account is that of D. Gelperin and W. C. Hetzel, who in 1988 drew their own conclusions on practical matters of testing (Yang & Chao, 1995). Their five-period analysis begins with the years up to 1956, which saw no distinction between testing and debugging and focused generally on removing defects from a system (Philip et al., 1997). This gave way to the period from 1957 to 1978, characterized by the establishment of a requirements stage in the new-system life cycle (James & Bret, 2001): the requirements had to meet the demands of the new system and its stakeholders, and the period grew out of the successful differentiation between testing and bug removal (Philip et al., 1997).

The period between 1979 and 1982 saw a real focus on breaking the software and was called the destruction-oriented period. From 1983 to 1987 the emphasis moved to evaluating product details, providing an overview of what was expected from the product being developed. Finally, from 1988 onward, caution became the driving force, with software developers seeking to deal with problems before they could occur (Hetzel, 1988).

Programmers also needed to ensure that the coded system produced the expected results. Through software analysts, they verified that the code was functional by making it meet a required set of conditions. Details on unit testing can be found in a 2006 survey of TDD, although that survey is not considered the decisive word on the matter (Hetzel, 1988).

TDD is also applicable in Extreme Programming (XP), an agile method. Other methods, such as the V-Model or the Rational Unified Process (RUP), differ from agile methods in their heavier degree of ceremony. XP's refinements have improved communication, interaction, and the satisfaction of both parties in system development through its simplicity (Hetzel, 1988).

Test frameworks such as JUnit are important in unit testing, which in turn is important for specification: they call for the system to be functioning at all times with the least possible occurrence of dysfunction (Hetzel, 1988).
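JUnit itself is a Java framework; as a hedged illustration, the same xUnit pattern in Python's built-in `unittest` module might look like the following. The `discount` function is a hypothetical unit under test:

```python
import unittest

def discount(price, rate):
    """Hypothetical function under test: apply a fractional discount."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class DiscountTest(unittest.TestCase):
    # Each test method pins down one required behavior, xUnit-style.
    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 0.25), 150.0)

    def test_zero_rate_is_identity(self):
        self.assertEqual(discount(99.99, 0.0), 99.99)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 1.5)

# Run the suite programmatically rather than via unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

Each test method encodes one condition the code must meet, which is exactly the "required set of conditions" role the text assigns to unit testing.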

A third practice is the use of team pairs. People work in pairs so that anything overlooked by one person is noticed by their partner on the assignment, and error rates are further minimized by continuously swapping partners to ensure optimal output (John, Philip, & David, 1988). This is called pair programming; although initially expensive, it saves considerable resources in the long run.

TDD focuses on the tests themselves and on the overall structure of the program. How a person thinks is influenced by language, and BDD uses this insight to build on TDD, moving the focus from what the code is to how it behaves (John, Philip, & David, 1988).
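One way to see the shift in emphasis is to write the same check twice. This is only a stylistic sketch in plain Python, not a real BDD framework such as the Gherkin-based tools:

```python
# TDD style: the test talks about the code unit and its return value.
def test_pop_returns_last_element():
    stack = [1, 2, 3]
    assert stack.pop() == 3
    return True

# BDD style: the same check phrased as behavior -- given / when / then.
def test_given_a_stack_when_popped_then_most_recent_item_comes_back():
    # given a stack with three items
    stack = [1, 2, 3]
    # when the stack is popped
    item = stack.pop()
    # then the most recently pushed item is returned
    assert item == 3
    return True

both_pass = (test_pop_returns_last_element()
             and test_given_a_stack_when_popped_then_most_recent_item_comes_back())
print("both styles pass:", both_pass)
```

The assertions are identical; only the language changes, which is precisely the point the paragraph above makes about language shaping thought.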

Software testing becomes more complicated as we strive for better quality. Using testing to locate and correct software defects can be an infinite process, and bugs can never be completely removed. As the complexity barrier indicates, testing and fixing problems do not necessarily improve the quality and reliability of the software; at times a fix may introduce far more severe problems into the system, as happened after a three-line bug fix to the signaling system that caused the 1991 telephone outage in California and on the eastern seaboard (John, Philip, & David, 1988).

Using formal methods to "prove" the correctness of software is also an attractive research direction, but it cannot surmount the complexity barrier either and works well only for relatively simple software. It does not scale to complex, full-fledged, large software systems, which are precisely the ones more vulnerable to error (Hetzel, 1988).

In a broader view, the ultimate purpose of testing must be questioned: are there truly effective testing methods, given that finding and removing defects does not necessarily lead to better quality? An analogy is the car manufacturing process. In the craftsmanship epoch, we built cars and hacked away at the problems and defects (Hetzel, 1988), but such methods were swept away by sound quality-engineering processes that make the car defect-free in the manufacturing phase. The software industry still supplies ample counterexamples: one reported bug resulted in incorrect indicators of signal strength in a phone's interface; customers had reportedly been complaining about the problem for several years, and the company provided a fix only several weeks after the issue came to light (John, Philip, & David, 1988).

Email services of a major smartphone system were interrupted or unavailable for nine hours in December 2009, the second service interruption within a week, according to news reports; the problems were believed to be due to bugs in new versions of the email system software. In August 2009 it was reported that a large suburban school district introduced a new computer system that was "plagued with bugs", resulting in many students starting the school year without schedules or with incorrect schedules, along with many problems with grades. Upset students and parents started a social networking site for sharing complaints.

In February 2009, users of a major search engine site were prevented from clicking through to sites listed in search results for part of a day. This was reportedly due to software that did not effectively handle an entry mistakenly placed in an internal ancillary reference file that was frequently updated for use by the search engine (Hetzel, 1988). Instead of being able to click through to listed sites, users were redirected to an intermediary site which, as a result of the suddenly enormous load, was rendered unusable. Separately, a large health insurance company was allegedly banned by regulators from selling certain types of insurance policies following ongoing computer system problems that resulted in denial of coverage for needed medications and mistaken overcharging or cancellation of benefits. The regulatory agency was quoted as stating that the problems posed "a serious threat to the health and safety of beneficiaries" (John, Philip, & David, 1988).

A news report in January 2009 indicated, “a major IT and management consulting company was still battling years of problems in implementing its own internal accounting systems” (Dustin et al, 1999). 

In August 2008 "it was reported that more than 600 U.S. airline flights were significantly delayed due to a software glitch in the U.S. FAA air traffic control system" (Dustin et al., 1999). The problem was attributed to a 'packet switch' that 'failed due to a database mismatch', and it occurred in the part of the system that handles required flight plans (Hetzel, 1988).

A major clothing retailer was reportedly hit with significant software and system problems when attempting to upgrade its online retailing systems in June 2008, and the problems remained ongoing for some time. When the company made its public quarterly financial report, the software and system problems were claimed as the cause of the poor financial results (Hetzel, 1988).

Software problems in the automated baggage sorting system of a major airport in February 2008 prevented thousands of passengers from checking baggage for their flights (Hetzel, 1988). It was reported that the breakdown occurred during a software upgrade, despite pre-testing of the software. The system continued to have problems in subsequent months (Dustin et al, 1999).

News reports in December of 2007 indicated that significant software problems were continuing to occur in a new ERP payroll system for a large urban school system. It was believed that more than one third of employees had received incorrect paychecks at various times since the new system went live the preceding January, “resulting in overpayments of $53 million” (Dustin et al, 1999). 

An employees' union brought a lawsuit against the school system; "the cost of the ERP system was expected to rise by 40% and the non-payroll part of the ERP system was delayed" (Dustin et al., 1999). Inadequate testing reportedly contributed to the problems. The school system was still working on cleaning up the aftermath of the problems in December 2009, going so far as to bring lawsuits against some employees to get them to return overpayments (Dustin et al., 1999).

In November of 2007, a regional government brought a multi-million dollar lawsuit against a software services vendor, “claiming that the vendor minimized quality in delivering software for a large criminal justice information system” and the system did not meet requirements. The vendor also sued its subcontractor on the project (Yang, & Chao, 1995).

In June of 2007 news reports revealed that software flaws in “a popular online stock-picking contest” could be used to gain an unfair advantage in “pursuit of the game's large cash prizes”. Outside investigators were called in and in July the contest winner was announced. According to the report, the winner had previously been in 6th place, indicating that the top 5 contestants may have been disqualified (Yang, & Chao, 1995).

A software problem contributed to a rail car fire in a major underground metro system in April of 2007. The software reportedly failed to perform as expected in detecting and preventing excess power usage in equipment on new passenger rail cars, resulting in overheating and fire in the rail car, and evacuation and shutdown of part of the system (John, Philip, & David, 1988).

News reports in May of 2006 described a multi-million dollar lawsuit settlement paid by a healthcare software vendor to one of its customers. It was reported that the customer claimed there were problems with the software they had contracted for, including poor integration of software modules, and problems that resulted in missing or incorrect data used by medical personnel (James, & Bret, 2001).

A newspaper article reported “major hybrid car manufacturer had to install a software fix on 20,000 vehicles due to problems with invalid engine warning lights and occasional stalling” (James, & Bret, 2001). In the article, “an automotive software specialist indicated that the automobile industry spends $2 billion to $3 billion per year fixing software problems” (James, & Bret, 2001).

Media reports in January of 2005 detailed “severe problems with a $170 million high-profile U.S. government IT systems project”. Software testing was one of the five major problem areas according to a report of the commission reviewing the project (James, & Bret, 2001). 

Research Methodology

In the software industry, the use of implicit and explicit ratings as a research methodology is as familiar to scientific researchers as grading systems are to learning institutions evaluating students' performance records. Researchers of the caliber of Alton Scheid have provided elaborate literature in this particular branch of computer science (Anick, 2003).

Ratings on a given scale enable researchers to make precise judgments and produce statistically processed figures that can then be used to assess the situation in question. In computing, information technology, and software engineering, the predominant scientific research methodologies are the explicit and implicit approaches (Morita & Shinoda, 1994).

Implicit Rating

According to Oard and Kim (1996), implicit feedback techniques are effective in obtaining information on users' behaviors, which is important in determining the preferences and interests of different users of the software. Oard and Kim grouped the observable characteristics of users into minimal scope and behavior category: the minimal scope represented the smallest feasible segment of the item being executed by the user, while the behavior category referred to the examination and annotation of users' behaviors (Oard & Kim, 1996).

Explicit Rating

Unlike the implicit rating methodology, the explicit approach is more like a field questionnaire, engaging users directly in giving information about themselves for use in analyzing their preferences. Oard and Marchionini (1996) emphasized that although explicit rating is also very important, it is prone to biases and data inaccuracy. For this reason, they proposed that the implicit and explicit rating systems be used together to improve feedback quality.

In line with these arguments, and with Oard's work in particular, research on software engineering should exploit both the implicit and the explicit methodologies in obtaining data and information from end users, as the combined feedback is likely to shed more light on the way forward for developing software technology and improving outcomes for practitioners, whose interests and demands for variety of information are rapidly increasing with time (Morita & Shinoda, 1994).

Other Strategies

The preceding research literature can also be employed in the endeavor to acquire the most accurate data. It includes earlier studies that provide a varied group of indicators of users' interests, both explicit and implicit, in an attempt to address the question of how user behaviors can be utilized as implicit measures of interest.

With the aid of other scientists, Morita and Shinoda (1994) experimented with users' reading time to automatically re-rank sentence-based synopses of the documents the users retrieved; the performance of the implicit system was also considered in the research. In the same work, they explored user behaviors through the assessment of newsgroup articles, which in their view could be utilized as implicit evidence for acquiring user profiles and for information filtering (Morita & Shinoda, 1994).
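The reading-time idea can be illustrated with a toy re-ranker. Everything here (the threshold, the data, the function name) is an assumption for illustration, not Morita and Shinoda's actual method:

```python
def rerank_by_reading_time(documents, reading_seconds, threshold=20.0):
    """Re-rank documents so those the user dwelt on longest come first.
    Items read for less than `threshold` seconds (or never opened) are
    treated as uninteresting; the threshold value is an assumption."""
    scored = [(reading_seconds.get(doc, 0.0), doc) for doc in documents]
    interesting = [(s, d) for s, d in scored if s >= threshold]
    interesting.sort(key=lambda pair: pair[0], reverse=True)
    boring = [d for s, d in scored if s < threshold]
    return [d for _, d in interesting] + boring

docs = ["a", "b", "c", "d"]
times = {"a": 5.0, "b": 90.0, "c": 45.0}   # "d" was never opened
print(rerank_by_reading_time(docs, times))  # ['b', 'c', 'a', 'd']
```

No user was ever asked a question: reading time alone supplies the preference signal, which is what makes the feedback implicit.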

Research Findings

There are different models of the software development life cycle, including the waterfall model and the spiral model (Hetzel, 1988). Though the models differ, they operate on certain common principles clustered into a number of phases, and the requirements stage of software development is essentially characterized by software testing processes (Hetzel, 1988). In all models of the development cycle, testing tends to come in the initial stages, first to ensure that the final product of the life cycle is up to its intended task. AdaTEST and Cantata are useful products for testing software performance and are very useful in the software testing stages; both were endorsed under ISO 9001 (Smith, 1990).

The software requirements stage involves extensive software testing, done to ensure that the program being developed suffers no major hiccups during the implementation stage (Philip, John, Christopher, & Daniel, 1997). It is done by exposing the program to conditions likely to cause poor performance, then finding and eliminating the resulting errors. After testing, the stakeholders, in this case the end users of the new software product, can feel assured that the product they are using performs at the best level; the test is thus conducted to uphold customer contentment with the new software product or service (Philip et al., 1997).

When software testing is not performed with care and caution, the new system can be susceptible to bugs. "Bug" is jargon used mostly by software scientists to refer to abnormal features detected in a software system that can cause errors in the performance of programs, leading to end-user dissatisfaction (Gelperin, 1988; Smith, 1990).

In order to avoid bugs and system failure, programmers set out to discover all possible hitches, though not every error is tracked down. Discovering the technical hitches in the system involves a theoretical comparison of the new system with either a perfect system or a previously used system that end users employed successfully (Cornett, 1996). With these two key comparisons, the programmer can detect what will probably work in the implemented system and what is prone to technical failures that may cause the program to perform below expectations (Cornett, 1996; Beizer, 1990). A theoretically perfect system gives the stakeholders a general overview of how the system is expected to run, reflected in how the program actually functions, while a former system that reached the required standards acts as a frame of reference, providing information on what can work and what is not practically executable (Cornett, 1996; Beizer, 1990).
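The comparison with a previously trusted system can be mechanized as differential testing: run both implementations on the same inputs and flag disagreements. A minimal sketch, in which all names, the 16% tax rule, and the seeded bug are hypothetical:

```python
def legacy_tax(amount):
    """Previously trusted implementation, used as the reference oracle."""
    return round(amount * 0.16, 2)

def new_tax(amount):
    """New implementation under test, with a deliberately seeded bug."""
    if amount > 1000:
        return round(amount * 0.18, 2)  # bug: wrong rate above 1000
    return round(amount * 0.16, 2)

def differential_test(candidates):
    """Flag every input where the new system disagrees with the reference."""
    return [a for a in candidates if new_tax(a) != legacy_tax(a)]

mismatches = differential_test([10, 500, 999, 1001, 2500])
print(mismatches)  # [1001, 2500]
```

The old system plays exactly the frame-of-reference role the paragraph describes: no one needs to know the "right" answer in advance, only where the two systems diverge.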

Apart from discovering the possible system errors that could occur after implementation, software testing has another critical function (Yang & Chao, 1995): determining the usefulness of the system to its users once implemented. The end users of the new system have unique needs which must be addressed during testing to ensure their satisfaction. Software testing will prove that a given piece of software is customized to meet the requirements of a security firm, if that is the end user, or the unique needs of a student using the system in college (Yang & Chao, 1995).

Automated software testing came into place to overtake the manual testing routine. Research findings have shown that even though manual testing is still valued, as it tracks down many software defects just as a modern automated test can, it is tedious and time-consuming; automating the test speeds up the process in real time (Beizer, 1990). Automated testing has been especially preferred for software products with long lifespans. From 1983 onwards there was a series of software evaluations aimed at upholding quality in software technology, and software testing picked up in 1988 when the focus of software developers turned to preventing possible failures and user discontent (Hetzel et al., 1990). This goal paved the way for the development of automated software testing. The work presented here re-investigates the fundamentals and scientific principles surrounding software testing technology; to help in understanding the technology behind software testing and its development, an elaborate review of previous research has also been given salient attention.

Discussions and Recommendations

It is common knowledge that software programs run on computers, and in the current high-tech world almost every system is run to some extent by computers: from the traffic control department to spaceships, and from students studying medicine to a patient on a life-support machine (Hamlet, 1994). The importance of software testing can therefore not be overemphasized: a small problem in a system, caused by bugs, can have devastating effects on a country's economy and even cost lives (Hamlet, 1994).

With this in mind, quality becomes a major component of software testing. Because programmers are prone to human error, a quality system is obtained only after several rounds of trial and error through continuous debugging. Apart from debugging, a system must also be tested for compatibility: the programmers and the various stakeholders sit down to choose a system that works best for them, considering the human and environmental factors the organization is most comfortable with. In the process, the organization should also choose, from a collection of candidate models for a new system, the model that best meets its needs with the least amount of defects (James & Bret, 2001).

To meet these quality-check requirements, step-by-step software testing is necessary. The stages of software development, as analyzed in the system life cycle, determine the stages of software testing (John, Philip, & David, 1988). The first stage is determining the requirements and ensuring they meet the stipulated needs. It should be noted that the system development life cycle goes hand in hand with the product cycle: the software is only one part of the entire product, which makes up a system with several parts (John, Philip, & David, 1988).

All models of system development include a point of software testing. Some of these models are the V-model and the waterfall model. All are divided into several stages: the software requirements stage, software design stage, software implementation stage, software verification stage, and finally maintenance. Software testing is done in nearly every stage of system development (Dustin et al., 1999).

After the software has gone through the entire software development life cycle, it joins the overall system. This overall system now forms the end product of system development life cycle (John, Philip, & David, 1988).

Before system development begins, stakeholders need to take note of the financial requirements. The bulk of the monetary input in the system development life cycle goes to the software testing stage (Cornett, 1996), because quality requirements force the testing process to be done over and over again. One form of repeated testing is regression testing, which considerably cuts the cost of testing because it is less risky (Dustin et al., 1999): it re-runs prior tests that produced good results, to check whether an error now occurring is due to a recent change, and then corrects the problem. Automating the software tests made the process even simpler and less time-consuming (Dustin et al., 1999).
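A regression suite in miniature: re-run cases that previously passed and report any that no longer do. The recorded cases and the changed implementation are hypothetical placeholders:

```python
def run_regression_suite(func, recorded_cases):
    """Re-run previously passing cases after a change; any mismatch with
    a recorded expected output is a regression introduced by the change."""
    regressions = []
    for args, expected in recorded_cases:
        actual = func(*args)
        if actual != expected:
            regressions.append((args, expected, actual))
    return regressions

# (input, expected-output) pairs recorded from the last known-good build
cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

def add_v2(a, b):
    # The changed implementation; the suite checks nothing old has broken.
    return a + b

print(run_regression_suite(add_v2, cases))  # [] means no regressions
```

Because the expected outputs were recorded when the tests last passed, the suite is cheap to re-run on every change, which is why the text describes regression testing as the natural target for automation.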

All software testing processes should begin as soon as possible after the onset of a system development life cycle. The earlier the checking begins, the better the system becomes at fulfilling its intended functions (Black, 2008).

System development life cycle models such as the V-model and the waterfall model are quite delicate. Both have to start successfully, and throughout the process caution should be taken to avoid any incidental errors, or the entire system may malfunction (James & Bret, 2001; Dustin et al., 1999).

For all models involved in the system development life cycle, automated testing is crucial, since it allows system checking to be done repeatedly. To make the checking easier, AdaTEST and Cantata have proved very useful (James & Bret, 2001).

As noted earlier, bugs are small or major defects that can cause real havoc in a system, and it is not possible to detect all the bugs in a given system. Even the simplest system can take years of debugging, which is not practical (Kaner, 2006); most systems are so complex that thorough debugging could run into millions of years before the program is 100% effective (Mark & Dorothy, 2008). System defects occur mostly because of human error, which should not be mistaken for carelessness on the programmers' part. Software programs and applications are extremely complex, which makes the bugs they present equally complex, and the system also operates under environmental influence: human, structural, and infrastructural factors all play a unique role in speeding up or slowing down the process of debugging (Kaner, 2006; Oard & Kim, 2001).

Most software is a digital system, which tends to differ remarkably from a physical system. While a physical system is rigid, a software system is often extremely malleable and delicate; in other words, it is much easier to find a problem in a physical system, where one can readily predict its cause and source, than to do the same in a software system (James & Bret, 2001). Due to this complexity, a system analyst cannot set out to discover all possible causes of errors and malfunction, as this could take a lifetime without success. These constitute some of the challenges and risk factors in implementing a new system: there is no total assurance of perfect performance satisfying everyone (James & Bret, 2001).

Another difficulty in automated software testing arises from software systems being completely dynamic (Jack & Nguyen, 1999). A software system can have a myriad of problems from a single cause, or a single problem from several causes that are hard to track down, so two tests done on a system at different times may yield differing results. A change in one aspect can affect several functions in a system, and those functions in turn change whatever they affect; conversely, a single complication can alter various features of the system, making it hard to point out the source and destination of the problem (James & Bret, 2001).

A good illustration of the complexity and dynamic nature of a software system is the pesticide paradox. Imagine a plantation attacked by different types of pests. The owner does not have the time or patience to examine each pest and find out whether they are similar or different; impatient to get rid of them, the owner applies a pesticide which harms and kills only some pests, while others are either immune or develop some resistance, making the job more complex with each attempt to solve the problem. The final analysis is that the plantation can only be brought to safety by taking the time to find out exactly which pests are the nuisances and then eradicating them, which takes time, patience, and effort.

Clients want a system that will meet their unique day-to-day needs, and if problems must be incurred while using the system, they should be as minimal as possible. Automated software testing was developed to make this process of continually reducing software problems easier and faster (Black, 2008).

The complexity of current software applications can be difficult to comprehend for anyone without experience of modern software development. Multi-tier distributed systems, applications utilizing multiple local and remote web services, data communications, enormous relational databases, security complexities, and the sheer size of applications have all contributed to the exponential growth in software complexity (Black, 2008).

Where requirements keep changing, the end-user may not understand the effects of the changes. If there are many minor changes, or any major ones, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating the changes may itself introduce errors. The enthusiasm of the engineering staff may also suffer. In some fast-changing business environments, continuously modified requirements are simply a fact of life. In that case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous, extensive testing to keep the inevitable bugs from running out of control. Automating the tests is by far the best way to do so.
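As a sketch of what automating the tests can look like in practice, here is a small regression suite using Python's `unittest` module. The `parse_order` function and its cases are hypothetical, invented for illustration and not drawn from the sources cited; the point is that such a suite can be re-run unattended (for instance, by a CI job) after every change.

```python
import unittest

# Hypothetical function under test: parse "item,qty" into (item, int(qty)).
def parse_order(line):
    item, qty = line.split(",")
    return item.strip(), int(qty)

class ParseOrderRegression(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(parse_order("widget, 3"), ("widget", 3))

    def test_whitespace(self):
        self.assertEqual(parse_order("  bolt ,10"), ("bolt", 10))

    def test_bad_qty_raises(self):
        with self.assertRaises(ValueError):
            parse_order("nut, many")

# Run the suite programmatically, as a CI script or pre-commit hook would:
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParseOrderRegression)
)
```

Because the suite is executable, it costs nothing to re-check old behavior on every requirements change, which is exactly where manual re-testing breaks down.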

References

Anick, P. "Using Terminological Feedback for Search Refinement: A Log-Based Study". In SIGIR '03: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York, NY, USA: ACM Press. 2003.

Beizer, B. "Software Testing Techniques". 2nd Ed. New York: Van Nostrand Reinhold. 1990. 21-430.

Black, R. "Advanced Software Testing, Vol 2: Guide to the ISTQB Advanced Certification as an Advanced Test Manager". Santa Barbara: Rocky Nook Publisher. 2008.

Cornett, S. "Code Coverage Analysis". USA: Carnegie Mellon University. 1996.

Dustin, E., et al. "Automated Software Testing". Addison Wesley. 1999.

Gelperin, D. "The Growth of Software Testing". CACM 31 (6). 1988. ISSN 0001-0782.

Harry, K.N., & Timothy, L.S. "Towards Target-Level Testing and Debugging Tools for Embedded Software". Conference Proceedings on TRI-Ada '93. 1993. 288-124.

Hamlet, D. "Foundations of Software Testing: Dependability Theory". Proceedings of the Second ACM SIGSOFT Symposium on Foundations of Software Engineering. 1994. 128-139.

Hetzel, B. "The Growth of Software Testing". CACM 31 (6). 1988. ISSN 0001-0782.

IEEE. "IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries". New York: IEEE. 1990.

Jack, F.H., & Nguyen, H.Q. "Testing Computer Software". 2nd Ed. New York: John Wiley and Sons, Inc. 1999. 480.

James, B., & Bret, P. "Lessons Learned in Software Testing: A Context-Driven Approach". Wiley. 2001. 13-4.

John, D.V., Philip, K.M., & David, G.D. "The Ballista Software Robustness Testing Service". Proceedings of TCS '99. Washington, DC. 1988.

Kaner, C. "Quality Assurance Institute Worldwide Annual Software Testing Conference". Florida Institute of Technology. Orlando, FL. November 2006.

Kropp, N.P., Koopman, P.J., & Siewiorek, D.P. "Automated Robustness Testing of Off-the-Shelf Software Components". Twenty-Eighth Annual International Symposium on Fault-Tolerant Computing (Cat. No. 98CB36224).

Morita, M.F., & Shinoda, Y.P. "Information Filtering Based on User Behavior Analysis and Best Match Text Retrieval". In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Ireland. 1994. 272-281.

Oard, D.W., & Kim, J. "Modeling Information Content Using Observable Behavior". Proceedings of the 64th Annual Meeting of the American Society for Information Science and Technology, USA. 2001. 38-45.

Philip, K.F., John, S.L., Christopher, D.R., & Daniel, S.J. "Comparing Operating Systems Using Robustness Benchmarks". 16th IEEE Symposium on Reliable Distributed Systems. Durham, NC: Oct 22-24, 1997. 72-79.

Mark, F., & Dorothy, G.S. "Software Test Automation". ACM Press - Addison-Wesley. 1999.

Savenkov, R. "How to Become a Software Tester". Roman Savenkov Consulting. 2008. 159.

Robert, V. "Testing Object-Oriented Systems: Objects, Patterns, and Tools". Addison Wesley Professional. 1999. 45-57.

Smith, C. "Performance Engineering of Software Systems". Addison-Wesley. 1990.

Yang, M.K., & Chao, A.P. "Reliability-Estimation and Stopping-Rules for Software Testing, Based on Repeated Appearances of Bugs". IEEE Transactions on Reliability, vol. 2. 1995. 315-21.

ACKNOWLEDGMENT

First and foremost, I take this salient opportunity to acknowledge my family members, relatives and friends for their unlimited support, which made this dream come true. Without their compassion and encouragement, I would not have possibly crossed the bridge by myself. I give lots of thanks to everyone else who directly or indirectly took part in the preparations which amounted to the success of my wedding. I also thank God for being so kind and considerate, making me feel complete and comfortable in life with the companion He chose for me.

Table of Contents

Chapter 1: The Meeting

Chapter 2: Friendship Grooves On

Chapter 3: The First Kiss

Chapter 4: The Day of Proposal

Chapter 5: Preparations Kick Off

Chapter 6: The Wedding Dress

Chapter 7: The Wedding Day

Chapter 8: Honeymoon

Conclusion (Lessons Learnt)

Glossary

References

Abstract

The Wedding. To make it more appealing and fulfilling to my own sanity, I will refer to it as "the dream of my life", because frankly it was. But first, let me explain what I mean when talking of a "wedding". It is a rather rare term in the public domain, and a scarce commodity in modern society, so anybody might have reason to declare that he or she has nothing to do with "the wedding", or, more precisely, does not know what it means anyway. According to Vicky Howard, a wedding simply means "a ceremony in which two people are united in marriage" (Howard, 2006, 34). However, when it comes to wedding ceremonies, there can be more than meets the eye. We might be obliged to say much about things like the culture of weddings, wedding varieties and so forth. Unfortunately, these are not addressed here, as they fall outside the scope of the objective. This is an autobiography in which I jot down a few things regarding the most memorable moments in my life. It is everything to do with my wedding: how I met him, the preparations, the big day and life thereafter are all tackled. I hope you enjoy it too.

Chapter 1

The Meeting

The world we live in is full of disappointments and disgruntled dreams. Few people live to witness their dreams become reality, and I am very contented to be among the few living witnesses of accomplished dreams. I was not a dreamer like Joseph in the Bible, nor was I obsessed with dreaming! I only had a few important dreams, like every other complete woman. We dream of living a decent life, a life full of tearful jubilations and soothing smiles. Most importantly, though, we dream of meeting the right partners.

Well, it was during those old schooling days, the days when I was still very young, active and committed to succeeding in life. I was schooling at "the Educational Opportunity Center" of SUNY, situated in Manhattan, New York. That was back in 1995. Little did I know that we were both attending the evening English classes, where we pursued English as a second language.

Chapter 2
Friendship Grooves on

As time went by, the winter season came again. During this season I often had nosebleeds, most commonly when the indoor temperatures were high and it was hot. So one particular day, my bothersome nose started again. Two drops of blood stained my notebook, and Michael, sitting right next to me, got surprisingly perturbed. You got it right: Michael was his name, a name I became so much used to. He was so gentle a man, offering me his white handkerchief to clean up the bleeding.

When Mr. Jones, our English teacher, noticed what was happening to me, he asked for a volunteer to accompany me to my apartment. Michael was once again there to help me out; he did not hesitate to accompany me home. He took me up to my doorstep, leaving me to have a rest. Fortunately, by the time we got there, the bleeding had already ceased.

Twenty minutes after Michael left, the phone rang. He was the one calling, to find out how I was getting on. I truly fell in love, and at least I was assured that there was someone who cared for me that much. In a couple of days, I resumed the evening classes. There was more of him near me, at least a few days every week. We had certainly become great friends.

Chapter 3

The First Kiss

A few days later, our class had a trip to the train stations. We were clustered up, so that students walked in groups. This was because the streets of Harlem had become unsafe at that point in time, and perhaps it was not safe walking alone along the streets. Most of my classmates opted for trains Number 2, A and D, whereas I was seemingly going to be alone on train Number 6; but was I? No, I was not going to be all by myself. Michael yet again accompanied me; this time round, he was with Tuipate, one of my other good friends. Tuipate, however, left us after a few miles of walking, as Michael and I boarded train Number 6.

I did not know it, but something was going to happen to me for the first time in my life. Something I have never forgotten, can never forget and will never forget. And every time I flash back to the sweet memories, it is as if it happened to me a couple of hours ago. The first kiss that became his key to my heart. There may be no appropriate words to describe how it happened, but let me just try sharing the passionate feelings.

He suddenly drew closer and so intimate that I was already nervous. Closer and closer he moved, his lips nearly touching mine, with just a slight miss of the daring kiss. On the next attempt, he caught me off guard, showering my stomach with butterflies, winning my marrow, granting me a short-lived lift from earth to the paradise of love, and before I could realize it, I was back to my senses.

I nearly slapped him, for we were right at the exit of Dela School in Harlem, on 125th Street, a few miles from the Evangelical Episcopal Church. Ironically, he could read it all over my face that I liked it. My body trembled in trepidation. It was not appropriate for me to kiss in public, and I feared the police could even be on our necks for a breach of public morality.

Chapter 4

The Day of Proposal 

This was no doubt bound to come; only, I was not sure about it. I had first informed Michael that I would be waiting to warmly welcome his family at my place, so that day I planned to prepare all sorts of meals for them. This was before he intervened in my plans, assuring me there was no need to bother cooking anything for them, as they were going to come with everything necessary for the occasion.

I went ahead to prepare a simple meal for the guests, the family, relatives and friends, albeit after Michael maintained they would come along with an already prepared meal. This was because I, too, wanted to impress his parents. I did not know this occasion was to become our engagement day.

The moment had actually come. Michael approached me and went straight down on bended knee right at my feet, carrying a ring. I was emotionally overwhelmed, shedding tears of joy. He held my hands tightly, kissed my palm and finally, formally sealed our engagement. After the engagement, our lives generally turned over a new leaf. It was a life full of love, care, affection and romance. Our dating period was nearly reaching its peak.

Chapter 5

Preparations kick off

We were not in a hurry to fix a wedding date. During our courtship, Michael took a vacation to the Dominican Republic, a week before Easter of 1999. I did not accompany him as he wished, because I was by then pursuing my part-time studies and working at the same time, so my schedule was really tight.

While he was away in the Dominican Republic, we communicated by phone, at least every night. It was during one of these series of night conversations that Michael openly proposed marriage to me. I had to think it over; fortunately for me, I could not find a single reason to turn down his proposal. It was on the 2nd of April, 1999, a date I have never forgotten for its very peculiarity. We went ahead to tackle the wedding plans. It was going to be the kind of wedding I had for so long fantasized about.

I departed for Santo Domingo two days later, arriving there on the 4th of April, 1999. The next day, the 5th of April, 1999, I left for Michael's hometown of Azua. The mission was to arrange the documents necessary for our upcoming wedding.

Chapter 6

The Wedding Dress

On the 6th of April, 1999, we embarked on the search for a befitting wedding gown. Our search in the village boutiques was not fruitful, so we had to try other means and travel to other parts of the country in search of "the golden gown". Michael moved to Santiago de Caballeros while I searched the capital, where I obtained the right dress plus a few other necessities. Michael, on the other hand, had to pick up his identity card, needed at the Civil Status Registry Office for the legal approval of our marriage. Unfortunately he did not get it at Santiago de Caballeros, forcing him to go to the capital, where he obtained an emergency identity card.

Luckily enough, all the documents we provided at the Civil Status Registry Office were approved by the 9th of April, 1999. Thanks go to the support and good wishes of our family members and friends, more so my brother China, who was with Michael during this time as he rushed up and down searching for his identity card and the other documents needed for the approval of the wedding. My friends Rosario Martinez and Lade Mendez, you are awesome friends; thanks for the role you two played in making my wedding so colorful. Your decorations did wonders for the entire ceremony.

Chapter 7

The Wedding Day

The long anticipated day was finally here! Gloria was on my case; so religious she was, giving me the best advice I had ever heard. On the 10th of April, 1999, I was up at the crack of dawn. It was natural that I would be nervous, but they calmed me down, I mean, my caring friends and family members. And as the time approached, there was no more time to waste, so I took my bath, had a fairly light breakfast, and they prepared me sufficiently for the occasion.

The wedding was scheduled to begin at 10:00 that morning. But it started a bit late, as the stylist was still busy working on my hair to match the standard of a bride, and to impress the bridegroom. She was assisted by my mother. Pormi and my father were supposed to hand me over to my husband, which did not turn out well, delaying the wedding and getting me quite worried.

The wedding commenced later, from 12:00 noon, at Luprisma Restaurant in the city of Azua de Compostela, Dominican Republic. Thanks to all the families, friends and relatives who were there that day. I got married in a long white gown, a veil and a crown. There was a lot to eat and drink, and it was a jubilation for all. What followed was a series of fun. I was later informed that the big party lasted until very late into the night. Everyone left satisfied, happy and drunk to the brim.

Chapter 8

Honeymoon

My husband Michael and I chose to have our honeymoon at the Salinas hotel in Bani, R.D., where we spent some memorable moments. Since I had to report back to work and resume my studies in New York, we regrettably had to cut the honeymoon short. On the 12th of April, 1999, I travelled back to New York, while Michael my husband left for Colorado.

Conclusion

From my wedding experience, I learnt a few critical lessons about this life. First, I learnt that people do not really plan for their marriages, nor do they know whom they will one day marry if they wish to be married. Our fates are simply destined differently; but one thing is for sure, it is sweetest when it is spontaneous.

Glossary

The Dream of my life: 

Is the fulfillment of my wedding, which transformed my life; my wedding constitutes the big dream.

Wedding: 

Is simply described as "a ceremony in which two people are united in marriage". Wedding ceremonies vary from culture to culture and from one tribe to another, other factors notwithstanding.

Bibliography

Howard, V. "American Weddings and the Business of Tradition". Philadelphia: University of Pennsylvania Press. 2006. 34.

Both Du Bois and Alain Locke were concerned with whether the American educational system, as a functional framework, encouraged African American students, just like white students, to seek acknowledgment outside of themselves, which would in the end lead to a sense of ontological and epistemological worthlessness. For Du Bois, the primary concern of his political platform was the status of black people as citizens of the United States and overseas, and how that status is tied not only to education but also to issues of economics, politics, and popular culture (Bois, 1935). Because he was fully aware that the modern intellectual tradition argued that a human being has the potential to be educated, to know, and to reason, questions concentrating on the purpose of educating black people in twentieth-century America became crucial for Du Bois (Bois, 1935).

White supremacy's marginalization, however, creates a schizophrenic state for black people, which Du Bois described as double consciousness: the inability of a person to be completely conscious within himself or herself. Rather, the person sees himself through the lens of the prevailing culture, the position in which black students found themselves. They had to learn to be white and to live in white culture if they were also to be educated (Bois, 1935). Double consciousness causes a conflict between two beings within one person: the need to define oneself, and the desire to integrate into the dominant society, which dismisses the individual's actual self as inferior; in other words, a conflict in one's way of being, way of thinking, and way of viewing the world. For black people, double consciousness highlights a racist education system that makes one feel both American and not American, as well as the conflict between being African and being American. Double consciousness, however, is not an experience confined to black people alone.

In the same line of thought, Alain Locke made a significant contribution to educational theory, based on the experiences of black students (Locke, 1935). Locke's educational philosophy mirrored his belief that excluded communities must preserve their cultural identity while affirming their humanity through appeals to the universal. Unlike many modern proponents of multiculturalism, Locke proposed to reconcile plurality and universalism by pointing to a third viewpoint: a universalized, common-denominator humanity. In other words, Locke maintained that cultural and racial differences should be regarded in terms of their universal application and value to the entire human experience, in both his aesthetic vision and his pedagogical concept. According to Locke, if America could ever generalize the belief that no one nation and no one race is, or ever will be, in a position to dominate the earth, then it would have broken the intellectual backbone of prejudice and, certainly in terms of education, laid an academic foundation for effective democracy.


In my view, both Locke and Du Bois are right, since educators' simultaneous commitments to diversity and to the declaration of universal human rights principles can be reconciled within a philosophy of education. This endeavor to bridge the gap between universalism and plurality informed Locke's sophisticated educational philosophy (Locke, 1935). A motivating element underlying Locke's and Du Bois's educational concepts was the shift from the Old Negro to the new African American student, or from a problem to a rich and malleable personality. Locke and Du Bois encourage us to reconsider how we educate people of color, particularly African Americans. Although Locke's primary interest was in the aesthetic, he was well aware, as indicated by the many types of courses he taught at Howard, that African Americans required a liberal and innovative pedagogical approach rather than a strictly defined education (Locke, 1935). Unlike Du Bois, however, Locke believed that the Negro did not require a single sort of education, but both forms, the political and the industrial alike.


References

Bois, W. E. (1935). Does the Negro need separate schools? The Journal of Negro Education, 4(3), 328. https://doi.org/10.2307/2291871

Locke, A. (1935). The dilemma of segregation. The Journal of Negro Education, 4(3), 406. https://doi.org/10.2307/2291875

Abstract

The relationship between science and religion has been a substantial theme that has sparked many debates and arguments among scholars and philosophers. There are two opposing groups of advocates and believers. The first holds that religion and science are interdependent, so that one cannot do without the other; this was the stand of Albert Einstein, who argued that "science without religion is lame and religion without science is blind" (Brook, 1991). Other philosophers disapprove of such claims of an interrelationship between religion and science. Take the case of John Williams and Stephen Gould, who sharply criticized and refuted the views of Albert Einstein and his adherents. According to John Williams, the connection between religion and science remains a matter of a "conflict thesis" that has yet to settle into a universal consensus (Drees, 1996). Ian Barbour is rather neutral in his judgment of these two universal concepts; he argues, however, more or less in line with Albert Einstein and, to some extent, in agreement with him (Barbour, 1997). And like some of us, Pascal is pessimistic about the ability of human reason to lead to true beliefs (Barbour, 1997). This paper reassesses and critically analyzes the comparative views of Ian Barbour and the pessimism of Pascal about religion and science. How does Ian Barbour argue about religion and science? To what extent are Pascal's arguments about truth, reason and faith convincing or justifiable? These are among the questions that this essay seeks to address.

Introduction

Imagining the relationships that can exist between religion and science is not easy. The literature discussing this subject, i.e. the relationship between religion and science, is overwhelming in volume; going into the details of every argument from all of it is nearly impossible and therefore outside the scope of this essay. This paper narrows the discussion down to the arguments of Ian Barbour. Barbour's four-fold paradigm for understanding the connection between religion and science is given salient emphasis in this piece of work. The paper then extends a bit further to examine what Pascal has to say about the same. Pascal's pessimistic views on truth, faith, reason, the heart and belief are all given attention in this essay.

Barbour's four-fold schema on religion and science

In his popular four-fold paradigm, Ian Barbour states and analyzes a range of feasible scenarios explaining the relationship between religion and science. The four modes constituting Barbour's typology are "conflict, independence, dialogue and integration" (Barbour, 1997).

Ian Barbour has been deemed by many scholars the one author who has exceptionally well outlined the relationship between religion and science. The followers of Barbour's beliefs and arguments about religion and science have commonly been referred to as Barbourians (Brook, 1991).

Barbour's arguments about religion and science are concentrated in a four-fold schema encompassing the views and assertions that have been made on this subject by other scholars (Barbour, 1997).

According to Barbour, the conflict mode reflects the natural tension between the two subjects. In his view of independence between religion and science, Barbour posits that the two fields operate in separate but equal domains, and that "they co-exist without interacting with one another". Stressing the similarities between science and religion, Barbour points to dialogue: he contends that even though religion and science remain distinct enterprises, dialogue makes them similar in terms of "the presuppositions made in both cases, the methodologies applied, and concepts utilized" (Barbour, 1997). If dialogue portrays the similarity between religion and science, their independence manifests the possible differences (Barbour, 1997). His last concept in the four-fold schema is integration, in which case Barbour holds that there is an attempt to merge the two fields into a single entity (Barbour, 1997).

In my opinion, Ian Barbour's stand on the relationship between religion and science is too general and prone to conflicting concepts and ambiguous conceptions. By amalgamating everything into a unit, Barbour's four-fold schema only seeks to address the problem of the many unsettled views about the connection between religion and science. He does not state clearly how his concept of integration harmonizes the co-existence of the two fields into a single entity, as this runs completely counter to the concepts of independence and conflict. The concept of dialogue is yet another contentious element of this relationship: as much as dialogue brings out similarity between the two, Barbour is not clear as to why the same dialogue would not also bring out their differences. In the same vein, it is not apparent why independence fails to bring out similarities in the relationship. These one-sided assumptions make the four-fold schema unconvincing and hard to justify.

Pascal is best known for his philosophical quotes and for his defense of Western Christian philosophy. In his arguments on the limits of human rationality in philosophy, Pascal asserts that "the reason did not alone satisfy all the functions of human philosophy". In one of his works, he states that "the heart has its reasons of which reason knows nothing". On his account, the heart seems to be independent of the reason, but the reason needs the heart, as there is no pure rationalism without either of the two (Drees, 1996).

Pascal states further that "we know the truth not only through our reason but also through our heart. It is through the latter that we know first principles, and reason, which has nothing to do with it, tries in vain to refute them" (Drees, 1996).


Pascal believed that the human heart is in a mutual relationship with the universal being, which according to him constitutes God and the self. More precisely, God is the source of the human being. He elaborates further that "it is the heart which perceives God and not the reason…faith is God perceived by the heart, not reason". On Pascal's presumptions, therefore, faith springs from love and emotion. He suggests that a sincere believer should "seek to hold onto the initial emotion of the heart, and use that faith and love as springboards from their source of faith and believe, not seek to find belief in reason after the fact…emotions the first cause, rationality proceeds after the fact" (Drees, 1996).

If Pascal's argument is anything to go by, then science, which basically deals with facts as opposed to theories or assumptions, will tend to be the result of faith and belief, both influenced by the heart and reason, with God being the ultimate truth.

The human mind is an organ composed of "multiple systems which guide human understanding and reaction to different realms" (Drees, 1996). To Pascal, none of these multiple systems links directly to religious concepts, which he regards as "supernatural concepts defined by their violations of some, but not all, normal domain-level expectations" (Drees, 1996). He contends that religious concepts "prey upon common intuitions about misfortunes that do not matter much to people's daily lives".

The human mind is thus exposed to many influential factors, among them religion, misfortune and culture. All these, as Pascal portrays it, determine what the mind believes and what people simply ignore or take for granted for lack of adequate exposure or experience. Only those people deeply exposed to religious concepts would accept the assertion that there are angels in heaven and fire that will burn sinners (Drees, 1996).

Those who are not exposed will find it difficult to hold the same belief, as they do not yet have evidence as to why they should believe in the same way of thinking. Mindsets can thus give rise to certain beliefs. Pascal's views are somewhat convincing in this case.

References

Barbour, G. Religion and Science: Historical and Contemporary Issues. San Francisco: HarperSanFrancisco. 1997.

Brook, J. Science and Religion: Some Historical Perspectives. Cambridge History of Science: Cambridge University Press. 1991.

Drees, W. Religion, Science and Naturalism. Cambridge: Cambridge University Press. 1996.
