Practice Free CTFL_SYLL_4.0 Exam Online Questions
Which of the following statements refers to good testing practice to be applied regardless of the chosen software development model?
- A . Tests should be written in executable format before the code is written and should act as executable specifications that drive coding
- B . Test levels should be defined such that the exit criteria of one level are part of the entry criteria for the next level
- C . Test objectives should be the same for all test levels, although the number of tests designed at various levels can vary significantly
- D . Involvement of testers in work product reviews should occur as early as possible to take advantage of the early testing principle
D
Explanation:
The statement that describes a good testing practice applicable regardless of the chosen software development model is option D: involvement of testers in work product reviews should occur as early as possible to take advantage of the early testing principle.

Work product reviews are a static testing technique in which work products of the software development process, such as the requirements, the design, the code, or the test cases, are examined by one or more reviewers, with or without the author, to identify defects, violations, or opportunities for improvement. Involving testers in work product reviews benefits the testing process by improving test quality, test efficiency, and communication. The early testing principle states that testing activities should start as early as possible in the software development lifecycle and be performed iteratively and continuously throughout it. Applying the principle helps to prevent, detect, and remove defects at an early stage, when they are easier, cheaper, and faster to fix, and reduces the risk, cost, and duration of the testing process.

The other options are not universally good practices; they are specific practices that may or may not be applicable or beneficial, depending on the context and the objectives of the testing activities:
Tests should be written in executable format before the code is written and should act as executable specifications that drive coding: this practice is associated with test-driven development (TDD), an approach in which developers write automated unit tests before writing the source code and then refactor the code until the tests pass. TDD can improve the quality, design, and maintainability of the code and provide fast feedback and guidance for developers. However, it is not a practice to be applied regardless of the development model: it may not be feasible, suitable, or effective when the requirements are unclear, unstable, or complex, when test automation tools or skills are not available or adequate, or when the testing objectives or levels are not aligned with unit testing.
Test levels should be defined such that the exit criteria of one level are part of the entry criteria for the next level: this practice is associated with sequential software development models, such as the waterfall model, the V-model, or the W-model, in which development and testing activities are performed in a linear order with well-defined phases, deliverables, and dependencies. Test levels are the stages of testing that correspond to the levels of integration of the software system, such as component testing, integration testing, system testing, and acceptance testing. Each test level should have clear and measurable entry and exit criteria, the conditions that must be met before starting or finishing that level. In sequential models, the exit criteria of one test level are usually part of the entry criteria for the next, to ensure that the software system is ready and stable for the next level of testing. However, this is not a practice to be applied regardless of the development model: it may not be relevant, flexible, or efficient when development and testing proceed iteratively and incrementally with frequent changes, feedback, and adaptation (as in agile models such as Scrum, Kanban, or XP), when the test levels are not clearly defined or distinguished, or when test levels are performed in parallel.
Test objectives should be the same for all test levels, although the number of tests designed at various levels can vary significantly: this practice is associated with cyclical software development models, such as the spiral model, the incremental model, or the prototyping model, in which development and testing activities are repeated for each cycle or iteration with a similar focus, scope, and perspective. Test objectives are the goals or purposes of testing, and they can vary depending on the test level, test type, test technique, test environment, and stakeholders. They should be specific, measurable, achievable, relevant, and time-bound, and aligned with the project objectives and the quality characteristics.
However, keeping the same objectives at every level is not a practice to be applied regardless of the development model: in sequential models such as the waterfall model, the V-model, or the W-model, the test objectives differ between component testing, integration testing, system testing, and acceptance testing, and in agile models such as Scrum, Kanban, or XP, the objectives change in response to feedback, learning, and adaptation of the testing process.
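As an illustration of the test-first practice described in option A, here is a minimal sketch in Python. The function `is_leap_year` and its rules are a hypothetical example, not part of the question; the point is only that the assertions act as an executable specification written before the code they drive.

```python
# Hypothetical test-first example: the assertions below act as an
# executable specification, written before the implementation existed.
def is_leap_year(year: int) -> bool:
    # Implementation written afterwards, refactored until the tests pass.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Executable specification (in TDD these would run red first, then green):
assert is_leap_year(2024)       # ordinary leap year
assert not is_leap_year(2023)   # ordinary non-leap year
assert not is_leap_year(1900)   # century not divisible by 400
assert is_leap_year(2000)       # century divisible by 400
```

In a real TDD cycle the tests would live in a unit test framework and would fail until the implementation is completed, which is exactly why the approach depends on adequate automation tooling and stable requirements.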
Reference: ISTQB Certified Tester Foundation Level (CTFL) v4.0 sources and documents:
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.1.1, Testing and the Software Development Lifecycle
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.2.1, Testing Principles
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.2.2, Testing Policies, Strategies, and Test Approaches
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 1.3.1, Testing in Software Development Lifecycles
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.1, Test Planning
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.2, Test Monitoring and Control
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.3, Test Analysis and Design
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.4, Test Implementation
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.5, Test Execution
ISTQB® Certified Tester Foundation Level Syllabus v4.0, Chapter 2.1.6, Test Closure
ISTQB® Glossary of Testing Terms v4.0, Work Product Review, Static Testing, Early Testing, Test-driven Development, Test Level, Entry Criterion, Exit Criterion, Test Objective, Test Basis, Test Coverage, Test Quality, Test Risk, Test Cost, Test Time
Which ONE of the following activities TYPICALLY belongs to the planning phase of the review process?
- A . A separate defect report is created for each identified defect so that corrective actions can be tracked.
- B . Each reviewer conducts an individual review to identify anomalies, recommendations, and questions.
- C . The purpose and scope of the review are defined, as well as the work product to be reviewed and the exit criteria.
- D . The reviewers analyze and discuss the anomalies found during the review in a joint meeting.
C
Explanation:
The planning phase of the review process (C) includes defining the review’s purpose, scope, and exit criteria to ensure alignment.
Option A is part of the defect management phase, B happens during individual preparation, and D takes place in the review meeting.
Reference: ISTQB CTFL v4.0 Syllabus, Section 3.2.2, Review Process
You are testing a room upgrade system for a hotel. The system accepts three different types of room (in increasing order of luxury): Platinum, Silver, and Gold Luxury. ONLY a Preferred Guest Card holder is eligible for an upgrade.
Below you can find the decision table defining the upgrade eligibility:
What is the expected result for each of the following test cases?
Customer A: Preferred Guest Card holder, holding a Silver room
Customer B: Non-Preferred Guest Card holder, holding a Platinum room
- A . Customer A: doesn’t offer any upgrade; Customer B: offers upgrade to Gold Luxury room
- B . Customer A: doesn’t offer any upgrade; Customer B: doesn’t offer any upgrade
- C . Customer A: offers upgrade to Gold Luxury room; Customer B: doesn’t offer any upgrade
- D . Customer A: offers upgrade to Silver room; Customer B: offers upgrade to Silver room
C
Explanation:
According to the decision table in the image, a Preferred Guest Card holder with a Silver room is eligible for an upgrade to Gold Luxury (YES), while a non-Preferred Guest Card holder, regardless of room type, is not eligible for any upgrade (NO). Therefore, Customer A (a Preferred Guest Card holder with a Silver room) would be offered an upgrade to Gold Luxury, and Customer B (a non-Preferred Guest Card holder with a Platinum room) would not be offered any upgrade.
Reference: The answer is derived directly from the decision table provided in the image; specific ISTQB Certified Tester Foundation Level (CTFL) v4.0 documents are not referenced.
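The decision logic described above can be sketched in Python. The tier names and the "upgrade to the next tier of luxury" rule are assumptions reconstructed from the question text, since the original decision table is only available as an image:

```python
# Sketch of the upgrade decision table; the tier names and the
# "upgrade to the next tier" rule are assumptions from the question text.
def upgrade_offer(preferred_guest: bool, room: str) -> str:
    """Return the room upgrade offered, or 'no upgrade'."""
    if not preferred_guest:
        return "no upgrade"            # non-card-holders are never eligible
    next_tier = {"Platinum": "Silver", "Silver": "Gold Luxury"}
    return next_tier.get(room, "no upgrade")   # top tier cannot be upgraded

# Customer A: Preferred Guest Card holder holding a Silver room
assert upgrade_offer(True, "Silver") == "Gold Luxury"
# Customer B: non-Preferred Guest Card holder holding a Platinum room
assert upgrade_offer(False, "Platinum") == "no upgrade"
```

Under these assumptions the sketch reproduces answer C: Customer A is offered Gold Luxury and Customer B is offered nothing.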
A Test Manager conducts risk assessment for a project. One of the identified risks is: "The sub-contractor may fail to meet his commitment". If this risk materializes, it will lead to a delay in completion of the testing required for the current cycle.
Which of the following sentences correctly describes the risk?
- A . It is a product risk since any risk associated with development timeline is a product risk.
- B . It is no longer a risk for the Test Manager since an independent party (the sub-contractor) is now managing it
- C . It is a project risk since successful completion of the project depends on successful and timely completion of the tests
- D . It is a product risk since default on part of the sub-contractor may lead to delay in release of the product
D
Explanation:
A product risk is a risk that affects the quality or timeliness of the software product being developed or tested. Product risks are related to the requirements, design, implementation, verification, and maintenance of the software product.
The risk of the sub-contractor failing to meet his commitment is a product risk, as it could cause a delay in the completion of the testing required for the current cycle, which in turn could affect the release date of the product. The release date is an important aspect of the product quality, as it reflects the customer satisfaction and the market competitiveness of the product.
The other options are not correct: option A wrongly assumes that any risk associated with the development timeline is a product risk; option B is wrong because delegating work to a sub-contractor does not remove the Test Manager's responsibility for managing the risk; and option C labels the risk a project risk, whereas the scenario, as explained above, describes an impact on the product's release.
Which of the following statements about static testing and dynamic testing is true?
- A . Static testing is better suited than dynamic testing for highlighting issues that could indicate inappropriate code modularization
- B . Dynamic testing can only be applied to executable work products, while static testing can only be applied to non-executable work products
- C . Both dynamic testing and static testing cause failures, but failures caused by static testing are usually easier and cheaper to analyze
- D . Security vulnerabilities can only be detected when the software is being executed, and thus they can only be detected through dynamic testing, not through static testing
B
Explanation:
Dynamic testing requires the execution of the software to evaluate its behavior and performance. In contrast, static testing involves examining the software’s code, design, and documentation without executing the software. This makes static testing applicable to non-executable work products such as requirement documents, design documents, and source code.
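To make the distinction concrete, here is a small Python sketch of a static check: the code under test is parsed and inspected without ever being executed. The no-docstring rule is just an illustrative check, not an ISTQB-defined technique:

```python
import ast

# The code under test is held as text and parsed, never executed; even
# the division that could fail at run time is examined safely.
source = """
def divide(a, b):
    return a / b   # would only fail dynamically, when b == 0

def safe_divide(a, b):
    \"\"\"Return a / b, or None when b is zero.\"\"\"
    return a / b if b else None
"""

tree = ast.parse(source)   # static analysis: no execution takes place
missing_docstrings = [
    node.name
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
]
print(missing_docstrings)  # ['divide']
```

A dynamic test of the same code would instead have to call `divide` with concrete inputs and observe its behavior, which requires the software to be executable.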
Which ONE of the following elements is TYPICALLY NOT part of a test progress report?
- A . Obstacles and their workarounds
- B . A detailed assessment of product quality
- C . Test metrics to show the current status of the test process
- D . New or changed risks
B
Explanation:
A test progress report provides an overview of testing activities, metrics, and identified risks. It focuses on tracking testing progress rather than evaluating overall product quality (B), which is typically included in a test summary report after testing is completed.
(A) is correct because obstacles (challenges) are reported to ensure test execution stays on track.
(C) is correct as test metrics help stakeholders track execution progress.
(D) is correct because new or changed risks impact test focus and priorities.
A test progress report tracks execution and informs stakeholders about ongoing testing activities.
Reference: ISTQB CTFL v4.0 Syllabus, Section 5.3, Test Monitoring and Control
Which of the following is not an example of a typical content of a test completion report for a test project?
- A . The additional effort spent on test execution compared to what was planned
- B . The unexpected test environment downtime that resulted in slower test execution
- C . The residual risk level if a risk-based test approach was adopted
- D . The test procedures of all test cases that have been executed
D
Explanation:
This answer is correct because the test procedures of all test cases that have been executed are not a typical content of a test completion report for a test project. A test completion report is a document that summarizes the test activities and results at the end of a test project. It usually includes information such as the test objectives, scope, approach, resources, schedule, results, deviations, issues, risks, lessons learned, and recommendations for improvement. The test procedures of all test cases that have been executed are part of the test documentation, but they are not relevant for the test completion report, as they do not provide a high-level overview of the test project outcomes and performance.
Reference: ISTQB Foundation Level Syllabus v4.0, Section 2.5.3.2
What type of testing measures its effectiveness by tracking which lines of code were executed by the tests?
- A . Acceptance testing
- B . Structural testing
- C . Integration testing
- D . Exploratory testing
B
Explanation:
Structural testing is a type of testing that measures its effectiveness by tracking which lines of code were executed by the tests. Structural testing, also known as white-box testing or glass-box testing, is based on the internal structure, design, or implementation of the software. It exercises the code paths, branches, statements, conditions, or data flows of the software, and uses coverage metrics, such as statement (line) coverage, branch coverage, or function coverage, to determine how much of the code has been tested and to identify any untested or unreachable parts of the code. Structural testing can be applied at any level of testing, such as unit testing, integration testing, system testing, or acceptance testing, but it is more commonly used at the lower levels, where the testers have access to the source code.
The other options are not correct because they are not types of testing that measure their effectiveness by tracking which lines of code were executed by the tests. Acceptance testing is a type of testing that verifies that the software meets the acceptance criteria and the user requirements.
Acceptance testing is usually performed by the end users or customers, who may not have access to the source code or the technical details of the software; it is more concerned with the functionality, usability, or suitability of the software than with its internal structure or implementation. Integration testing verifies that the software components or subsystems work together as expected. It is usually performed by developers or testers, who may use both structural and functional test techniques to check the interfaces, interactions, or dependencies between the components or subsystems; it is more concerned with the integration logic, data flow, or communication of the software than with its individual lines of code. Exploratory testing involves simultaneous learning, test design, and test execution. It is usually performed by testers, who use their creativity, intuition, or experience to explore the software and discover defects, risks, or opportunities for improvement; it is more concerned with the behavior, quality, or value of the software than with its internal structure or implementation.
Reference: ISTQB Certified Tester Foundation Level (CTFL) v4.0 Syllabus, Chapter 4, Test Techniques; Chapter 1, Fundamentals of Testing.
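As a rough illustration of the principle behind line-based coverage, the following Python sketch uses a trace function to record which statements of a function were executed by a test; statement coverage tools such as coverage.py are built on the same mechanism:

```python
import sys

# Minimal sketch: a trace function records which statements of
# classify() execute, as line numbers relative to the function's start.
def classify(x):
    if x >= 0:
        return "non-negative"
    return "negative"

executed_lines = set()
first_line = classify.__code__.co_firstlineno

def tracer(frame, event, arg):
    # record only 'line' events that occur inside classify()
    if event == "line" and frame.f_code.co_name == "classify":
        executed_lines.add(frame.f_lineno - first_line)
    return tracer

prev = sys.gettrace()          # preserve any tracer already installed
sys.settrace(tracer)
classify(5)                    # exercises the 'if' line and the first return
sys.settrace(prev)

# The 'return "negative"' statement (relative line 3) never ran, so
# this single test achieves less than 100% statement coverage.
print(sorted(executed_lines))  # [1, 2]
```

Adding a second test such as `classify(-1)` would execute the remaining statement and bring the function to full statement coverage.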
Consider the following table, which contains information about test cases from the test management system:

Which ONE of the following options organizes the test cases based on the statement coverage strategy, while considering practical constraints?
- A . {TC 20; TC 30; TC 10; TC 40; TC 50; TC 70; TC 60; TC 80; TC 90}
- B . {TC 10; TC 30; TC 20; TC 60; TC 40; TC 80; TC 90; TC 50; TC 70}
- C . {TC 80; TC 70; TC 50; TC 60; TC 20; TC 30; TC 10; TC 40; TC 90}
- D . {TC 60; TC 80; TC 40; TC 90; TC 50; TC 10; TC 70; TC 30; TC 20}
D
Explanation:
The statement coverage strategy prioritizes test cases with higher statement coverage first, while resolving dependencies before execution.
TC60 (7%) has the highest coverage but depends on REQ 1, so it should be executed after its dependency is covered.
TC80 (6%) depends on REQ 2, so it should be prioritized after TC40 (REQ 2).
TC40 (5%) and TC90 (5%) should be executed early.
Lower coverage test cases (TC10, TC70, TC30, TC20) should come last.
Thus, the correct order is {TC 60; TC 80; TC 40; TC 90; TC 50; TC 10; TC 70; TC 30; TC 20} (D).
Reference: ISTQB CTFL v4.0 Syllabus, Section 4.3, White-Box Testing Techniques
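The prioritization idea can be sketched as a greedy ordering: repeatedly pick the highest-coverage test case whose requirement dependency is already satisfied. The coverage figures and dependencies below are hypothetical stand-ins, since the actual table is only available as an image in the question:

```python
# Hypothetical stand-in for the question's table:
# name: (statement coverage %, requirement covered, requirement depended on)
test_cases = {
    "TC40": (5, "REQ2", None),
    "TC60": (7, "REQ1", None),
    "TC80": (6, "REQ3", "REQ2"),
    "TC90": (5, "REQ4", None),
}

ordered, covered = [], set()
remaining = dict(test_cases)
while remaining:
    # consider only test cases whose dependency is already satisfied
    ready = {name: tc for name, tc in remaining.items()
             if tc[2] is None or tc[2] in covered}
    best = max(ready, key=lambda name: ready[name][0])   # highest coverage
    ordered.append(best)
    covered.add(remaining.pop(best)[1])

print(ordered)   # ['TC60', 'TC40', 'TC80', 'TC90']
```

Note how TC80, despite covering more statements than TC40, is held back until TC40 has satisfied its REQ2 dependency, which is the "practical constraints" aspect the question is testing.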
Which of the following statements is true?
- A . Functional testing focuses on what the system should do while non-functional testing on the internal structure of the system
- B . Non-functional testing includes testing of both technical and non-technical quality characteristics
- C . Testers who perform functional tests are generally expected to have more technical skills than testers who perform non-functional tests
- D . The test techniques that can be used to design white-box tests are described in the ISO/IEC 25010 standard
B
Explanation:
Non-functional testing includes testing of both technical and non-technical quality characteristics. Non-functional testing is the process of testing the quality attributes of a system, such as performance, usability, security, reliability, etc. Non-functional testing can be applied at any test level and can use both black-box and white-box test techniques. Non-functional testing can cover both technical aspects, such as response time, throughput, resource consumption, etc., and non-technical aspects, such as user satisfaction, accessibility, compliance, etc. Therefore, option B is the correct answer.
Reference: ISTQB® Certified Tester Foundation Level Syllabus v4.0, Section 1.3.1, page 13; ISTQB® Glossary v4.0, page 40.
