Q::41 The main objective of designing various modules of a software system is:
(A) To decrease the cohesion and to increase the coupling
(B) To increase the cohesion and to decrease the coupling
(C) To increase the coupling only
(D) To increase the cohesion only
Answer: B
Explanation:
In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion measures the strength of relationship between pieces of functionality within a given module. For example, in highly cohesive systems functionality is strongly related.
Cohesion is an ordinal type of measurement and is usually described as “high cohesion” or “low cohesion”. Modules with high cohesion tend to be preferable because high cohesion is associated with several desirable traits of software including robustness, reliability, reusability, and understandability whereas low cohesion is associated with undesirable traits such as being difficult to maintain, test, reuse, or even understand.
Cohesion is often contrasted with coupling, a different concept. High cohesion often correlates with loose coupling, and vice versa.
Cohesion is a qualitative measure, meaning that the source code to be measured is examined using a rubric to determine a classification. Cohesion types, from worst to best, are as follows (a code sketch contrasting the two extremes appears after the list):
- Coincidental cohesion (worst)
- Coincidental cohesion is when parts of a module are grouped arbitrarily; the only relationship between the parts is that they have been grouped together (e.g. a “Utilities” class).
- Logical cohesion
- Logical cohesion is when parts of a module are grouped because they are logically categorized to do the same thing even though they are different by nature (e.g. grouping all mouse and keyboard input handling routines).
- Temporal cohesion
- Temporal cohesion is when parts of a module are grouped by when they are processed - the parts are processed at a particular time in program execution (e.g. a function which is called after catching an exception which closes open files, creates an error log, and notifies the user).
- Procedural cohesion
- Procedural cohesion is when parts of a module are grouped because they always follow a certain sequence of execution (e.g. a function which checks file permissions and then opens the file).
- Communicational/informational cohesion
- Communicational cohesion is when parts of a module are grouped because they operate on the same data (e.g. a module which operates on the same record of information).
- Sequential cohesion
- Sequential cohesion is when parts of a module are grouped because the output from one part is the input to another part like an assembly line (e.g. a function which reads data from a file and processes the data).
- Functional cohesion (best)
- Functional cohesion is when parts of a module are grouped because they all contribute to a single well-defined task of the module (e.g. Lexical analysis of an XML string).
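As a minimal Python sketch of the two extremes above (hypothetical class and method names, not from the source), compare a coincidentally cohesive grab-bag with a functionally cohesive module:

```python
# Coincidental cohesion (worst): unrelated helpers lumped into a
# "Utilities" class; the parts share nothing but their container.
class Utilities:
    @staticmethod
    def parse_date(text: str) -> tuple[int, int, int]:
        year, month, day = (int(part) for part in text.split("-"))
        return year, month, day

    @staticmethod
    def send_email(address: str, body: str) -> None:
        print(f"mail to {address}: {body}")

# Functional cohesion (best): every method contributes to one
# well-defined task: tokenizing a piece of text.
class Tokenizer:
    def __init__(self, text: str):
        self.text = text

    def tokens(self) -> list[str]:
        return self.text.split()

    def token_count(self) -> int:
        return len(self.tokens())
```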
Coupling
- In software engineering, coupling is the manner and degree of interdependence between software modules; a measure of how closely connected two routines or modules are; the strength of the relationships between modules. Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. Low coupling is often a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability. Coupling types, from highest (worst) to lowest (best), are listed below; a code sketch contrasting content and data coupling follows the list.
- Content coupling (high)
- Content coupling (also known as Pathological coupling) occurs when one module modifies or relies on the internal workings of another module (e.g., accessing local data of another module).
- Therefore changing the way the second module produces data (location, type, timing) will lead to changing the dependent module.
- Common coupling
- Common coupling (also known as Global coupling) occurs when two modules share the same global data (e.g., a global variable).
- Changing the shared resource implies changing all the modules using it.
- External coupling
- External coupling occurs when two modules share an externally imposed data format, communication protocol, or device interface. It basically relates to communication with external tools and devices.
- Control coupling
- Control coupling is one module controlling the flow of another, by passing it information on what to do (e.g., passing a what-to-do flag).
- Stamp coupling (Data-structured coupling)
- Stamp coupling occurs when modules share a composite data structure and use only a part of it, possibly a different part (e.g., passing a whole record to a function that only needs one field of it).
- This may lead to changing the way a module reads a record because a field that the module does not need has been modified.
- Data coupling
- Data coupling occurs when modules share data through, for example, parameters. Each datum is an elementary piece, and these are the only data shared (e.g., passing an integer to a function that computes a square root).
- Message coupling (low)
- This is the loosest type of coupling. It can be achieved by state decentralization (as in objects), with components communicating only via parameters or message passing (see Message passing).
- No coupling
- Modules do not communicate at all with one another.
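A minimal Python sketch (hypothetical names, not from the source) contrasting the high and low ends of the list above, content coupling versus data coupling:

```python
# Content coupling (high, undesirable): one module reaches into
# another module's internal state directly.
class Account:
    def __init__(self, balance: float):
        self._balance = balance  # intended as an internal detail

def apply_fee_content_coupled(account: Account) -> None:
    account._balance -= 5.0  # breaks if Account's internals change

# Data coupling (low, desirable): modules communicate only through
# elementary parameters and return values.
def apply_fee(balance: float, fee: float) -> float:
    return balance - fee

acct = Account(100.0)
apply_fee_content_coupled(acct)      # fragile
new_balance = apply_fee(100.0, 5.0)  # robust: no shared internals
```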
Q::42 Three essential components of a software project plan are:
(A) Team structure, Quality assurance plans, Cost estimation
(B) Cost estimation, Time estimation, Quality assurance plan
(C) Cost estimation, Time estimation, Personnel estimation
(D) Cost estimation, Personnel estimation, Team structure
Answer: B
Q::43 Reliability of software is dependent on:
(A) Number of errors present in software
(B) Documentation
(C) Testing suites
(D) Development Processes
Answer: A
Explanation:
Software reliability is the ability of a computer program to perform its intended functions and operations in a system's environment without experiencing failure (system crash). Software reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software reliability is also an important factor affecting system reliability. It differs from hardware reliability in that it reflects design perfection rather than manufacturing perfection. The high complexity of software is the major contributing factor to software reliability problems. Software reliability is not a function of time, although researchers have come up with models relating the two.
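The passage notes that researchers have devised models relating reliability and time. As a worked illustration of the "probability of failure-free operation for a specified period" definition, here is a minimal Python sketch assuming the common constant-failure-rate (exponential) model; the model choice and the numbers are illustrative assumptions, not part of the source:

```python
import math

def reliability(failure_rate: float, hours: float) -> float:
    """Probability of failure-free operation for `hours` under a
    constant failure rate: R(t) = exp(-failure_rate * t)."""
    return math.exp(-failure_rate * hours)

# E.g., at 0.001 failures/hour, the chance of running 100 hours
# without failure is exp(-0.1), roughly 0.905.
print(reliability(0.001, 100))
```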
Q::44 In transform analysis, input portion is called:
(A) Afferent branch (B) Efferent branch
(C) Central Transform (D) None of the above
Answer: A
Explanation:
A structure chart is produced by converting a DFD; this conversion is described as 'transform mapping (analysis)'. It is applied by transforming input data flow into output data flow.
Transform analysis establishes the modules of the system, also known as the primary functional components, as well as the inputs and outputs of the identified modules in the DFD. Transform analysis is made up of a number of steps. The first is dividing the DFD into three parts:
Input
Logical processing
Output
The 'input' part of the DFD covers operations that change high-level input data from physical to logical form, e.g. from a keyboard input to storing the typed characters in a database. Each individual instance of an input is called an 'afferent branch'.
The 'output' part of the DFD is similar to the 'input' part in that it also acts as a conversion process. However, the conversion here is from the logical output of the system into a physical one, e.g. text stored in a database converted into a printed version through a printer. Similarly, each individual instance of an output is called an 'efferent branch'. The remaining part of the DFD is called the central transform.
Once the above step has been conducted, transform analysis moves on to the second step: the structure chart is established by identifying one module each for the central transform and the afferent and efferent branches. These are controlled by a 'root module' which acts as the 'invoking' part of the DFD.
In order to establish the highest input and output conversions in the system, a ‘bubble’ is drawn out. In other words, the inputs are mapped out to their outputs until an output is found that cannot be traced back to its input. Central transforms can be classified as processes that manipulate the inputs/outputs of a system e.g. sorting input, prioritizing it or filtering data. Processes which check the inputs/outputs or attach additional information to them cannot be classified as central transforms. Inputs and outputs are represented as boxes in the first level structure chart and central transforms as single boxes.
Moving on to the third step of transform analysis, sub-functions (formed from the breaking up of high-level functional components, a process called ‘factoring’) are added to the structure chart. The factoring process adds sub-functions that deal with error-handling and sub-functions that determine the start and end of a process.
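To make the partition concrete, the following minimal Python sketch (hypothetical function names and file format, not from the source) arranges an afferent branch, a central transform, and an efferent branch under a root module, mirroring the first-level structure chart described above:

```python
# Hypothetical example: a program that ranks scores read from a file.

def read_scores(path: str) -> list[int]:          # afferent branch:
    with open(path) as f:                         # physical input -> logical data
        return [int(line) for line in f if line.strip()]

def rank_scores(scores: list[int]) -> list[int]:  # central transform:
    return sorted(scores, reverse=True)           # pure logical processing

def print_report(ranked: list[int]) -> None:      # efferent branch:
    for position, score in enumerate(ranked, 1):  # logical data -> physical output
        print(f"{position}. {score}")

def main(path: str) -> None:                      # root (invoking) module
    print_report(rank_scores(read_scores(path)))
```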
Transform analysis is a set of design steps that allows a DFD with transform flow characteristics to be mapped into a specific architectural style. These steps are as follows:
Step1: Review the fundamental system model
Step2: Review and refine DFD for the SW
Step3: Assess the DFD in order to decide the usage of transform or transaction flow.
Step4: Identify incoming and outgoing boundaries in order to establish the transform center.
Step5: Perform "first-level factoring".
Step6: Perform "second-level factoring".
Step7: Refine the first-iteration architecture using design heuristics for improved SW quality.
Q::45 The Function Point (FP) metric is:
(A) Calculated from user requirements
(B) Calculated from Lines of code
(C) Calculated from software’s complexity assessment
(D) None of the above
Answer: C
Explanation:
The function point metric was proposed by Albrecht [1983]. This metric overcomes many of the shortcomings of the LOC metric. Since its inception in the late 1970s, the function point metric has been slowly gaining popularity. One important advantage of the function point metric is that the size of a software product can be estimated directly from the problem specification. This is in contrast to the LOC metric, where the size can be accurately determined only after the product has been fully developed. The conceptual idea behind the function point metric is that the size of a software product is directly dependent on the number of different functions or features it supports. A software product supporting many features would certainly be larger than a product with fewer features. Each function, when invoked, reads some input data and transforms it into the corresponding output data. For example, the issue-book feature of a library automation software takes the name of the book as input and displays its location and the number of copies available. Thus, counting the input and output data values of a system gives some indication of the number of functions it supports. Albrecht postulated that in addition to the number of basic functions a software product performs, its size also depends on the number of files and the number of interfaces.
Besides the number of input and output data values, the function point metric uses three other characteristics of the product, as shown in the expression below. The size of a product in function points (FP) can be expressed as the weighted sum of these five problem characteristics. The weights associated with the five characteristics were proposed empirically and validated by observations over many projects. The function point is computed in two steps. The first step is to compute the unadjusted function point (UFP):
UFP = (Number of inputs) * 4 + (Number of outputs) * 5 + (Number of inquiries) * 4 + (Number of files) * 10 + (Number of interfaces) * 10
Number of inputs: Each data item input by the user is counted. Data inputs should be distinguished from user inquiries; inquiries are user commands, such as print-account-balance, and are counted separately. Note that individual data items input by the user are not counted one by one; a group of related inputs is considered a single input.
For example, while entering the data concerning an employee into an employee payroll software, the data items name, age, sex, address, phone number, etc. are together considered a single input. All these data items can be considered related, since they pertain to a single employee.
Number of outputs: The outputs considered are reports printed, screen outputs, error messages produced, etc. While counting the number of outputs, the individual data items within a report are not considered; a set of related data items is counted as one output.
Number of inquiries: Number of inquiries is the number of distinct interactive queries which can be made by the users. These inquiries are the user commands which require specific action by the system.
Number of files: Each logical file is counted. A logical file is a group of logically related data; thus, logical files can be data structures or physical files.
Number of interfaces: Here the interfaces considered are the interfaces used to exchange information with other systems. Examples of such interfaces are data files on tapes, disks, communication links with other systems etc.
Once the unadjusted function point (UFP) is computed, the technical complexity factor (TCF) is computed next. TCF refines the UFP measure by considering fourteen other factors, such as high transaction rates, throughput, and response time requirements. Each of these 14 factors is assigned a value from 0 (not present or no influence) to 5 (strong influence). The resulting numbers are summed, yielding the total degree of influence (DI). TCF is then computed as TCF = 0.65 + 0.01 * DI. As DI can vary from 0 to 70, TCF can vary from 0.65 to 1.35.
Finally, FP = UFP * TCF.
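Putting the two steps together, here is a minimal Python sketch of the computation as described above (the example counts are invented for illustration):

```python
def function_points(inputs: int, outputs: int, inquiries: int,
                    files: int, interfaces: int,
                    degree_of_influence: int) -> float:
    """Compute FP using the weights given in this explanation.
    degree_of_influence (DI) is the sum of the 14 factor ratings (0..70)."""
    ufp = (inputs * 4 + outputs * 5 + inquiries * 4
           + files * 10 + interfaces * 10)
    tcf = 0.65 + 0.01 * degree_of_influence  # ranges from 0.65 to 1.35
    return ufp * tcf

# E.g., 30 inputs, 60 outputs, 23 inquiries, 8 files, 2 interfaces, DI = 25:
# UFP = 120 + 300 + 92 + 80 + 20 = 612, TCF = 0.90, so FP = 550.8.
print(function_points(30, 60, 23, 8, 2, 25))
```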