Practice Free CTFL_SYLL_4.0 Exam Online Questions
A typical objective of testing is to ensure that:
- A . testing is used to drive the development of the software
- B . the software has been tested using a combination of test techniques
- C . there are no defects in the software that is about to be released
- D . the software has been properly covered
B
Explanation:
This answer is correct because a typical objective of testing is to ensure that the software has been
tested using a combination of test techniques, such as black-box, white-box, or experience-based techniques, that are appropriate for the test objectives, test levels, and test types. Combining test techniques can increase the effectiveness and efficiency of testing, as different techniques target different aspects of software quality, such as functionality, usability, performance, security, and reliability. It also reduces the risk of missing defects that one technique would detect but another would not.
Reference: ISTQB Foundation Level Syllabus v4.0, Section 2.3.1.1, Section 2.3.2
A number of characteristics are given for impact of SDLC on the testing effort.
i. Completion of the requirements review leading to test analysis
ii. Both – static and dynamic testing performed at unit testing level
iii. Frequent regression testing may need to be performed
iv. Extensive product documentation
v. More use of exploratory testing
Which of the following statements is MOST correct?
- A . i, ii and iii are characteristics of sequential models; iv and v are characteristics of iterative and incremental models
- B . i and iv are characteristics of sequential models; ii, iii and v are characteristics of iterative and incremental models
- C . ii and iv are characteristics of sequential models; i, iii and v are characteristics of iterative and incremental models
- D . iii and iv are characteristics of sequential models and i, ii and v are characteristics of iterative and incremental models
B
Explanation:
Sequential models, such as the Waterfall model, typically involve completing the requirements review before moving on to test analysis, as well as producing extensive product documentation (i and iv). Iterative and incremental models, like Agile and Spiral models, often involve both static and dynamic testing at the unit testing level, frequent regression testing due to continuous integration and changes, and more use of exploratory testing (ii, iii, and v).
Reference: ISTQB CTFL Syllabus V4.0, Section 2.1 on software development lifecycle models and their impact on testing, which discusses the characteristics and testing practices of different lifecycle models.
Which of the following statements about estimation of the test effort is WRONG?
- A . Once the test effort is estimated, resources can be identified and a schedule can be drawn up.
- B . Effort estimate can be inaccurate because the quality of the product under tests is not known.
- C . Effort estimate depends on the budget of the project.
- D . Experience based estimation is one of the estimation techniques.
C
Explanation:
The effort estimate does not depend on the budget of the project, but rather on the scope, complexity, and quality of the software product and the testing activities. Budget is a constraint that may affect the feasibility and accuracy of the effort estimate, but it is not a factor that determines it. The effort estimate is the amount of work required to complete the testing activities, measured in terms of person-hours, person-days, or person-months.
The other options are correct because: once the test effort is estimated, resources can be identified and a schedule can be drawn up (A); the estimate can be inaccurate because the quality of the product under test is not yet known (B); and experience-based estimation is indeed one of the recognized estimation techniques (D).
Which of the following statements is true?
- A . A defect does not always produce a failure, while a bug always produces a failure
- B . A defect may cause a failure which, when occurring, always causes an error
- C . Failures can be caused by defects, but also by environmental conditions
- D . Bugs are defects found during component testing, while failures are defects found at higher test levels
C
Explanation:
Failures can be caused by defects, but also by environmental conditions. A failure is an event in which the software system does not perform a required function or performs a function incorrectly, according to the expected behavior. A defect is a flaw in the software system or a deviation from the requirements or the specifications, that may cause a failure. However, not all failures are caused by defects, as some failures may be caused by environmental conditions, such as hardware malfunctions, network interruptions, power outages, incompatible configurations, etc. Environmental conditions are factors that affect the operation of the software system, but are not part of the software system itself.
The other statements are false, because:
A defect does not always produce a failure, while a bug always produces a failure. This statement is false, because a defect may or may not produce a failure, depending on the inputs, the outputs, the states, or the scenarios of the software system, and a bug is just another term for a defect, so it has the same possibility of producing a failure as a defect.
For example, a defect in a rarely used feature or a hidden branch of the code may never produce a failure, while a defect in a frequently used feature or a critical path of the code may produce a failure often. A bug is not a different concept from a defect, but rather a synonym or a colloquial term for a defect, so it has the same definition and implications as a defect.
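A small hypothetical example can make this concrete. The `shipping_cost` function below is invented for illustration; it contains a deliberate defect in a branch that is rarely exercised, so the defect produces a failure only when that branch runs:

```python
def shipping_cost(weight_kg: float, express: bool = False) -> float:
    """Compute a shipping cost. Contains a deliberate defect."""
    base = weight_kg * 2.0
    if express:
        # Defect: the express surcharge should be ADDED, not subtracted.
        return base - 5.0
    return base

# On the common path the defect never produces a failure:
print(shipping_cost(3.0))                # 6.0 -> correct
# Only when the rarely used branch executes does the defect
# manifest as a failure (expected 11.0):
print(shipping_cost(3.0, express=True))  # 1.0 -> failure
```

Until some test or user invokes the `express` branch, the software behaves as expected and the defect remains latent.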
A defect may cause a failure which, when occurring, always causes an error. This statement is false, because an error is not a consequence of a failure, but rather a cause of a defect. An error is a human action or a mistake that produces a defect in the software system, such as a typo, a logic flaw, a requirement misunderstanding, etc. An error is not observable in the software system, but rather in the human mind or the human work products, such as the code, the design, the documentation, etc. A failure is not a cause of an error, but rather a result of a defect, which is a result of an error.
For example, an error in the code may cause a defect in the software system, which may cause a failure in the software behavior.
Bugs are defects found during component testing, while failures are defects found at higher test levels. This statement is false, because bugs and failures are not different types of defects, but rather different terms for defects and their manifestations. As mentioned before, bugs are just another word for defects, and failures are the events in which the software system does not perform as expected due to defects. Bugs and failures can be found at any test level, not only at component testing or higher test levels. Test levels are the stages of testing that correspond to the levels of
integration of the software system, such as component testing, integration testing, system testing, and acceptance testing. Defects and failures can occur and be detected at any test level, depending on the test objectives, the test basis, the test techniques, and the test environment.
Reference: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.1.2, Testing and Quality
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.2.1, Testing Principles
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.3.1, Testing in Software Development Lifecycles
ISTQB® Glossary of Testing Terms v4.0, Failure, Defect, Bug, Environmental Condition, Error, Test Level
The tests at the bottom layer of the test pyramid:
- A . run faster than the tests at the top layer of the pyramid
- B . cover larger pieces of functionalities than the tests at the top layer of the pyramid
- C . are defined as ‘UI Tests’ or ‘End-To-End Tests’ in the different models of the pyramid
- D . are unscripted tests produced by experience-based test techniques
A
Explanation:
The tests at the bottom layer of the test pyramid run faster than the tests at the top layer of the pyramid because they are more focused, isolated, and atomic. They usually test individual units or components of the software system, such as classes, methods, or functions. They are also easier to maintain and execute, as they have fewer dependencies and interactions with other parts of the system. The tests at the top layer of the test pyramid, on the other hand, are slower because they cover larger pieces of functionalities, such as user interfaces, workflows, or end-to-end scenarios. They also have more dependencies and interactions with other systems, such as databases, networks, or external services. They are more complex and costly to maintain and execute, as they require more setup and teardown procedures, test data, and test environments.
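The kind of test that sits at the bottom of the pyramid can be sketched as follows. This is an illustrative example, not from the syllabus: `apply_discount` is a hypothetical pure function, tested in isolation with Python's standard `unittest` module, with no database, network, or UI dependency, which is why such tests run so quickly:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical pure function: an ideal target for a unit test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Isolated and atomic: no external dependencies, so each test
    # executes in microseconds and is trivial to maintain.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Such a suite can be run with `python -m unittest`; a top-of-pyramid end-to-end test of the same rule would instead have to drive a UI and real services, with all the setup cost that implies.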
Reference: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 3.2.1, Test Pyramid
ISTQB® Glossary of Testing Terms v4.0, Test Pyramid
Which ONE of the following options MOST ACCURATELY describes the activities of “testing” and “debugging”?
- A . Testing triggers a failure that is caused by a defect in the software, whereas debugging is concerned with finding the causes of this failure (defects), analyzing these causes, and eliminating them.
- B . Testing triggers a failure that is caused by a defect in the software, whereas debugging is concerned with finding the causes of this failure (defects), analyzing these causes, and reproducing them.
- C . Testing identifies a defect that is caused by an error in the software, whereas debugging is concerned with finding the causes of this defect, analyzing these causes, and eliminating them.
- D . Testing triggers a defect that is caused by an error in the software, whereas debugging is concerned with finding the causes of this defect, analyzing these causes, and eliminating them.
A
Explanation:
Testing and debugging are separate but related activities. Testing executes the software to identify failures that result from defects (A). Debugging is the developer’s responsibility and involves finding the cause of a failure (defect), analyzing it, and fixing the defect. The ISTQB syllabus explicitly differentiates these activities. Testing does not modify the software, whereas debugging does.
Reference: ISTQB CTFL v4.0 Syllabus, Section 1.1.2, Testing and Debugging
Which of the following statements is true?
- A . Experience-based test techniques rely on the experience of testers to identify the root causes of defects found by black-box test techniques
- B . Some of the most common test basis used by white-box test techniques include user stories, use cases and business processes
- C . Experience-based test techniques are often useful to detect hidden defects that have not been targeted by black-box test techniques
- D . The primary goal of experience-based test techniques is to design test cases that can be easily automated using a GUI-based test automation tool
C
Explanation:
Experience-based test techniques are test design techniques that rely on the experience, knowledge, intuition, and creativity of the testers to identify and execute test cases that are likely to find defects in the software system. Experience-based test techniques are often useful to detect hidden defects that have not been targeted by black-box test techniques, which are test design techniques that use the external behavior and specifications of the software system as the test basis, without considering its internal structure or implementation. Experience-based test techniques can complement black-box test techniques by covering aspects that are not explicitly specified, such as usability, security, reliability, performance, etc.
The other statements are false, because:
Experience-based test techniques do not rely on the experience of testers to identify the root causes of defects found by black-box test techniques, but rather to identify the potential sources of defects
based on their own insights, heuristics, or exploratory testing. The root causes of defects are usually identified by debugging or root cause analysis, which are activities that involve examining the code or the development process to find and fix the errors that led to the defects.
Some of the most common test basis used by white-box test techniques include the source code, the design documents, the architecture diagrams, and the control flow graphs of the software system. White-box test techniques are test design techniques that use the internal structure and implementation of the software system as the test basis, and aim to achieve a certain level of test coverage based on the code elements, such as statements, branches, paths, etc. User stories, use cases, and business processes are examples of test basis used by black-box test techniques, as they describe the functional and non-functional requirements of the software system from the perspective of the users or the stakeholders.
The primary goal of experience-based test techniques is not to design test cases that can be easily automated using a GUI-based test automation tool, but rather to design test cases that can reveal defects that are not easily detected by other test techniques, such as boundary value analysis, equivalence partitioning, state transition testing, etc. Test automation is the use of software tools to execute test cases and compare actual results with expected results, without human intervention. Test automation can be applied to different types of test techniques, depending on the test objectives, the test levels, the test tools, and the test resources. However, test automation is not always feasible or beneficial, especially for test cases that require human judgment, creativity, or exploration, such as those designed by experience-based test techniques.
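For contrast with experience-based techniques, a systematic technique such as boundary value analysis derives its test cases mechanically from the specification. The sketch below assumes a hypothetical rule that a valid order quantity lies between 1 and 100 inclusive; two-value boundary value analysis selects each boundary and its closest invalid neighbour:

```python
def accepts_quantity(quantity: int) -> bool:
    """Hypothetical rule: a quantity is valid if 1 <= quantity <= 100."""
    return 1 <= quantity <= 100

# Two-value boundary value analysis: each boundary plus the
# nearest value just outside the valid range.
boundary_cases = {0: False, 1: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert accepts_quantity(value) == expected
```

An experience-based tester might instead probe values no specification mentions (negative numbers, huge inputs, non-integers), which is exactly where hidden defects tend to live.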
Reference: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.2.1, Black-box Test Design Techniques
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.2.2, White-box Test Design Techniques
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.2.3, Experience-based Test Design Techniques
ISTQB® Glossary of Testing Terms v4.0, Experience-based Test Technique, Black-box Test Technique, White-box Test Technique, Test Basis, Test Coverage, Test Automation
A typical test objective is to:
- A . determine the most appropriate level of detail with which to design test cases
- B . verify the compliance of the test object with regulatory requirements
- C . plan test activities in accordance with the existing test policy and test strategy
- D . verify the correct creation and configuration of the test environment
B
Explanation:
In the ISTQB CTFL Syllabus, it is stated that a key objective of testing is to verify that the test object meets regulatory requirements. This is crucial as compliance with regulatory standards ensures that the software adheres to necessary laws, guidelines, and safety standards which are often mandatory in various industries such as healthcare, finance, and aviation. Ensuring regulatory compliance helps prevent legal issues and promotes user safety and trust.
