Wednesday 30 March 2016

UGC-NET COMPUTER SCIENCE PAPER-2 DECEMBER 2004 Answer Key with Explanation

Q::46 Data Mining can be used as ................. tool.
(A) Software     (B) Hardware
(C) Research    (D) Process
Answer: C
Explanation:
Data mining is an interdisciplinary subfield of computer science. It is the computational process of discovering patterns in large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.

The term is a misnomer, because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support system, including artificial intelligence, machine learning, and business intelligence.

The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, but do belong to the overall KDD process as additional steps.
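To make the cluster-analysis step above concrete, here is a minimal sketch using scikit-learn's KMeans. The toy records, the feature choice, and the use of two clusters are assumptions made purely for illustration and are not part of the question.

```python
# Minimal sketch of cluster analysis (one of the data mining tasks above).
# The toy dataset and k=2 are illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans

# Toy "customer" records: (age, yearly purchases)
data = np.array([
    [23, 2], [25, 3], [22, 1],      # one apparent group
    [54, 40], [58, 45], [60, 38],   # another apparent group
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.labels_)           # cluster id assigned to each record
print(model.cluster_centers_)  # a compact "summary" of the input data
```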

The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations.

Q::47 The processing speeds of pipeline segments are usually:
(A) Equal           (B) Unequal
(C) Greater        (D) None of these
Answer: B
Explanation:

Pipelining

In computing, a pipeline is a set of data processing elements connected in series, where the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion; in that case, some amount of buffer storage is often inserted between elements.

Computer-related pipelines include:

Instruction pipelines, such as the classic RISC pipeline, which are used in central processing units (CPUs) to allow overlapping execution of multiple instructions with the same circuitry. The circuitry is usually divided up into stages, including instruction decoding, arithmetic, and register fetching stages, wherein each stage processes one instruction at a time.
Graphics pipelines, found in most graphics processing units (GPUs), which consist of multiple arithmetic units, or complete CPUs, that implement the various stages of common rendering operations (perspective projection, window clipping, color and light calculation, rendering, etc.).

Software pipelines, where the output of one operation is automatically fed as input to the next operation. The Unix system call pipe is a classic example of this concept, although other operating systems support pipes as well.
Example: Pipelining is a natural concept in everyday life, e.g. on an assembly line. Consider the assembly of a car: assume that certain steps in the assembly line are to install the engine, install the hood, and install the wheels (in that order, with arbitrary interstitial steps). A car on the assembly line can have only one of the three steps done at once. After the car has its engine installed, it moves on to having its hood installed, leaving the engine installation facilities available for the next car. The first car then moves on to wheel installation, the second car to hood installation, and a third car begins to have its engine installed. If engine installation takes 20 minutes, hood installation takes 5 minutes, and wheel installation takes 10 minutes, then finishing all three cars when only one car can be assembled at once would take 105 minutes. On the other hand, using the assembly line, the total time to complete all three is 75 minutes. At this point, additional cars will come off the assembly line at 20 minute increments.
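The arithmetic in the car example can be checked with a few lines of Python. The stage times are the ones given above; the pipelined formula assumes the simple model in which the slowest stage sets the rate at which finished cars emerge.

```python
# Reproduces the assembly-line arithmetic from the example above.
stages = [20, 5, 10]   # engine, hood, wheels (minutes)
cars = 3

# Without pipelining: each car passes through all stages alone.
sequential = cars * sum(stages)                      # 3 * 35 = 105 minutes

# With pipelining: the first car needs sum(stages) minutes, after which
# a car finishes every max(stages) minutes (the slowest stage).
pipelined = sum(stages) + (cars - 1) * max(stages)   # 35 + 2*20 = 75 minutes

print(sequential, pipelined)   # 105 75
```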
Linear pipelines
A linear pipeline processor is a cascade of processing stages connected linearly, through which a stream of data flows from one end to the other to perform a fixed function.
Non-linear pipelines
A non-linear pipeline (also called a dynamic pipeline) can be configured to perform different functions at different times. In a dynamic pipeline there may also be feed-forward or feedback connections between stages. A non-linear pipeline also allows very long instruction words.

Q::48 The cost of parallel processing is primarily determined by:
(A) Time complexity    
(B) Switching complexity
(C) Circuit complexity
(D) None of the above
Answer: B
Explanation:
In digital signal processing (DSP), parallel processing is a technique that duplicates function units so that different tasks (signals) can be operated on simultaneously. Accordingly, we can perform the same processing for different signals on the corresponding duplicated function units. Further, because of this duplication, a parallel DSP design often has multiple outputs, resulting in higher throughput than a non-parallel design.
Consider a function unit (F0) and three tasks (T0, T1 and T2). The time required for the function unit F0 to process those tasks is t0, t1 and t2 respectively. Then, if we run these three tasks in sequential order, the time required to complete them is t0 + t1 + t2.

[Figure: the three tasks executed sequentially (non-parallel) on a single function unit]
However, if we duplicate the function unit into two additional copies, so that each task runs on its own unit, the aggregate time is reduced to max(t0, t1, t2), which is smaller than the sequential total.



[Figure: the three tasks executed in parallel on duplicated function units]
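The same timing argument can be written out directly. The task times t0, t1, t2 below are made-up values, chosen only to show that max(t0, t1, t2) is smaller than t0 + t1 + t2.

```python
# Sketch of the timing argument above: one function unit versus one
# duplicated unit per task. Task times are arbitrary example values.
t = [7, 3, 5]              # t0, t1, t2 (arbitrary units)

sequential_time = sum(t)   # single unit F0 processes the tasks one by one: 15
parallel_time = max(t)     # each task on its own duplicated unit: 7

print(sequential_time, parallel_time)
```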



Q::49 A data warehouse is always ....................
(A) Subject oriented    (B) Object oriented
(C) Program oriented (D) Compiler oriented
Answer: A
Explanation:
In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data and are used for creating analytical reports for knowledge workers throughout the enterprise. Examples of reports could range from annual and quarterly comparisons and trends to detailed daily sales analysis.

The data stored in the warehouse is uploaded from the operational systems (such as marketing, sales, etc.). The data may pass through an operational data store for additional operations before it is used in the DW for reporting.

Q::50 The term 'hacker' was originally associated with:
(A) A computer program
(B) Virus
(C) Computer professionals who solved complex computer problems.
(D) All of the above
Answer: C




Q::41 The main objective of designing various modules of a software system is:
(A) To decrease the cohesion and to increase the coupling
(B) To increase the cohesion and to decrease the coupling
(C) To increase the coupling only
(D) To increase the cohesion only
Answer: B
Explanation:
In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion measures the strength of relationship between pieces of functionality within a given module. For example, in highly cohesive systems functionality is strongly related.
Cohesion is an ordinal type of measurement and is usually described as “high cohesion” or “low cohesion”. Modules with high cohesion tend to be preferable because high cohesion is associated with several desirable traits of software including robustness, reliability, reusability, and understandability whereas low cohesion is associated with undesirable traits such as being difficult to maintain, test, reuse, or even understand.
Cohesion is often contrasted with coupling, a different concept. High cohesion often correlates with loose coupling, and vice versa.
Cohesion is a qualitative measure, meaning that the source code to be measured is examined using a rubric to determine a classification. Cohesion types, from the worst to the best, are as follows:
Coincidental cohesion (worst)
Coincidental cohesion is when parts of a module are grouped arbitrarily; the only relationship between the parts is that they have been grouped together (e.g. a “Utilities” class).
Logical cohesion
Logical cohesion is when parts of a module are grouped because they are logically categorized to do the same thing even though they are different by nature (e.g. grouping all mouse and keyboard input handling routines).
Temporal cohesion
Temporal cohesion is when parts of a module are grouped by when they are processed - the parts are processed at a particular time in program execution (e.g. a function which is called after catching an exception which closes open files, creates an error log, and notifies the user).
Procedural cohesion
Procedural cohesion is when parts of a module are grouped because they always follow a certain sequence of execution (e.g. a function which checks file permissions and then opens the file).
Communicational/informational cohesion
Communicational cohesion is when parts of a module are grouped because they operate on the same data (e.g. a module which operates on the same record of information).
Sequential cohesion
Sequential cohesion is when parts of a module are grouped because the output from one part is the input to another part like an assembly line (e.g. a function which reads data from a file and processes the data).
Functional cohesion (best)
Functional cohesion is when parts of a module are grouped because they all contribute to a single well-defined task of the module (e.g. Lexical analysis of an XML string).
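As a rough illustration of the two ends of this scale, the sketch below contrasts a coincidentally cohesive "Utilities" grab-bag with a functionally cohesive lexer-style class. All class and function names are invented for the example.

```python
# Hedged illustration of the cohesion scale described above.

# Coincidental cohesion (worst): unrelated helpers grouped arbitrarily.
class Utilities:
    @staticmethod
    def parse_date(text): ...
    @staticmethod
    def send_email(address, body): ...
    @staticmethod
    def compress_image(pixels): ...

# Functional cohesion (best): every part serves one well-defined task,
# here the lexical analysis of an XML string.
class XmlLexer:
    def __init__(self, text):
        self.text, self.pos = text, 0

    def next_token(self):
        """Return the next lexical token of the XML string."""
        # ... scanning logic would go here in a real lexer ...
        return None
```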

Coupling

In software engineering, coupling is the manner and degree of interdependence between software modules; a measure of how closely connected two routines or modules are; the strength of the relationships between modules.
Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. Low coupling is often a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability.
Content coupling (high)
Content coupling (also known as Pathological coupling) occurs when one module modifies or relies on the internal workings of another module (e.g., accessing local data of another module).
Therefore changing the way the second module produces data (location, type, timing) will lead to changing the dependent module.
Common coupling
Common coupling (also known as Global coupling) occurs when two modules share the same global data (e.g., a global variable).
Changing the shared resource implies changing all the modules using it.
External coupling
External coupling occurs when two modules share an externally imposed data format, communication protocol, or device interface. This is basically related to the communication to external tools and devices.
Control coupling
Control coupling is one module controlling the flow of another, by passing it information on what to do (e.g., passing a what-to-do flag).
Stamp coupling (Data-structured coupling)
Stamp coupling occurs when modules share a composite data structure and use only a part of it, possibly a different part (e.g., passing a whole record to a function that only needs one field of it).
This may lead to changing the way a module reads a record because a field that the module does not need has been modified.
Data coupling
Data coupling occurs when modules share data through, for example, parameters. Each datum is an elementary piece, and these are the only data shared (e.g., passing an integer to a function that computes a square root).
Message coupling (low)
This is the loosest type of coupling. It can be achieved by state decentralization (as in objects) and component communication is done via parameters or message passing (see Message passing).
No coupling
Modules do not communicate at all with one another.
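The sketch below contrasts common (global) coupling with data coupling using two small, invented functions; it is only an illustration of the definitions above, not a prescribed design.

```python
# Common coupling: both functions depend on the same global variable,
# so changing it affects every module that uses it.
tax_rate = 0.18                       # shared global state

def price_with_tax_global(price):
    return price * (1 + tax_rate)     # hidden dependence on the global

def update_tax_rate(new_rate):
    global tax_rate
    tax_rate = new_rate               # ripples through all users

# Data coupling: only elementary data is passed through parameters,
# with no hidden shared state.
def price_with_tax(price, rate):
    return price * (1 + rate)

print(price_with_tax_global(100.0))   # 118.0
print(price_with_tax(100.0, 0.18))    # 118.0
```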
Q::42 Three essential components of a software project plan are:
(A) Team structure, Quality assurance plans, Cost estimation
(B) Cost estimation, Time estimation, Quality assurance plan
(C) Cost estimation, Time estimation, Personnel estimation
(D) Cost estimation, Personnel estimation, Team structure
Answer: B
Q::43 Reliability of software is dependent on:
(A) Number of errors present in software
(B) Documentation
(C) Testing suites
(D) Development Processes

Answer: A

Explanation:
The ability of a computer program to perform its intended functions and operations in a system's environment without experiencing failure (system crash). Software reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software reliability is also an important factor affecting system reliability. It differs from hardware reliability in that it reflects design perfection rather than manufacturing perfection. The high complexity of software is the major contributing factor to software reliability problems. Software reliability is not a simple function of time, although researchers have come up with models relating the two.


Q::44 In transform analysis, input portion is called:
(A) Afferent branch                 (B) Efferent branch
(C) Central Transform             (D) None of the above

Answer: A

Explanation:

A structure chart is produced by converting a DFD (data flow diagram); this conversion is described as ‘transform mapping (analysis)’. It is applied by ‘transforming’ the input data flow into the output data flow.
Transform analysis establishes the modules of the system, also known as the primary functional components, as well as the inputs and outputs of the identified modules in the DFD. Transform analysis is made up of a number of steps that need to be carried out. The first one is the dividing of the DFD into 3 parts:
Input
Logical processing
Output
The ‘input’ part of the DFD covers operations that change high-level input data from physical to logical form, e.g. from a keyboard input to storing the typed characters in a database. Each individual instance of an input is called an ‘afferent branch’.
The ‘output’ part of the DFD is similar to the ‘input’ part in that it also acts as a conversion process. However, the conversion is concerned with turning the logical output of the system into a physical one, e.g. text stored in a database converted into a printed version through a printer. Also, as with the ‘input’, each individual instance of an output is called an ‘efferent branch’. The remaining part of the DFD is called the central transform.
Once the above step has been conducted, transform analysis moves on to the second step: the structure chart is established by identifying one module each for the central transform and for the afferent and efferent branches. These are controlled by a ‘root module’ which acts as the ‘invoking’ part of the DFD.
In order to establish the highest input and output conversions in the system, a ‘bubble’ is drawn out. In other words, the inputs are mapped out to their outputs until an output is found that cannot be traced back to its input. Central transforms can be classified as processes that manipulate the inputs/outputs of a system e.g. sorting input, prioritizing it or filtering data. Processes which check the inputs/outputs or attach additional information to them cannot be classified as central transforms. Inputs and outputs are represented as boxes in the first level structure chart and central transforms as single boxes.
Moving on to the third step of transform analysis, sub-functions (formed from the breaking up of high-level functional components, a process called ‘factoring’) are added to the structure chart. The factoring process adds sub-functions that deal with error-handling and sub-functions that determine the start and end of a process.
Transform analysis is a set of design steps that allows a DFD with transform flow characteristics to be mapped into specific architectural style. These steps are as follows:
Step 1: Review the fundamental system model.
Step 2: Review and refine the DFD for the software.
Step 3: Assess the DFD in order to decide whether transform or transaction flow applies.
Step 4: Identify the incoming and outgoing boundaries in order to establish the transform center.
Step 5: Perform "first-level factoring".
Step 6: Perform "second-level factoring".
Step 7: Refine the first-iteration architecture using design heuristics for improved software quality.
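A hedged sketch of the three-way split described above, with the afferent branch, central transform and efferent branch written as separate Python functions under an invented order-report scenario, follows; the file names and record layout are assumptions for illustration only.

```python
# Afferent branch / central transform / efferent branch as separate
# functions (an illustrative mapping of the idea, not a prescribed one).

def read_orders(path):                 # afferent branch: physical -> logical
    with open(path) as f:
        return [line.strip().split(",") for line in f if line.strip()]

def total_per_customer(rows):          # central transform: pure processing
    totals = {}
    for customer, amount in rows:      # assumes rows of (customer, amount)
        totals[customer] = totals.get(customer, 0.0) + float(amount)
    return totals

def write_report(totals, path):        # efferent branch: logical -> physical
    with open(path, "w") as f:
        for customer, total in sorted(totals.items()):
            f.write(f"{customer}: {total:.2f}\n")

def main(in_path="orders.csv", out_path="report.txt"):
    # "Root module" invoking the three branches in order.
    write_report(total_per_customer(read_orders(in_path)), out_path)
```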

Q::45 The Function Point (FP) metric is:
(A) Calculated from user requirements
(B) Calculated from Lines of code
(C) Calculated from software’s complexity assessment
(D) None of the above
Answer: C


Explanation:

The function point metric was proposed by Albrecht [1983]. This metric overcomes many of the shortcomings of the LOC metric. Since its inception in the late 1970s, the function point metric has been slowly gaining popularity. One of the important advantages of the function point metric is that it can be used to easily estimate the size of a software product directly from the problem specification. This is in contrast to the LOC metric, where the size can be accurately determined only after the product has been fully developed. The conceptual idea behind the function point metric is that the size of a software product is directly dependent on the number of different functions or features it supports. A software product supporting many features would certainly be of larger size than a product with fewer features. Each function, when invoked, reads some input data and transforms it into the corresponding output data. For example, the issue-book feature of a Library Automation Software takes the name of the book as input and displays its location and the number of copies available. Thus, a count of the number of input and output data values of a system gives some indication of the number of functions it supports. Albrecht postulated that in addition to the number of basic functions that a software product performs, the size is also dependent on the number of files and the number of interfaces.







Besides using the number of input and output data values, function point metric computes the size of a software product (in units of functions points or FPs) using three other characteristics of the product as shown in the following expression. The size of a product in function points (FP) can be expressed as the weighted sum of these five problem characteristics. The weights associated with the five characteristics were proposed empirically and validated by the observations over many projects. Function point is computed in two steps. The first step is to compute the unadjusted function point (UFP).

UFP = (Number of inputs)*4 + (Number of outputs)*5 + (Number of inquiries)*4 + (Number of files)*10 + (Number of interfaces)*10

Number of inputs: Each data item input by the user is counted. Data inputs should be distinguished from user inquiries. Inquiries are user commands such as print-account-balance. Inquiries are counted separately. It must be noted that individual data items input by the user are not considered in the calculation of the number of inputs, but a group of related inputs are considered as a single input.
For example, while entering the data concerning an employee to an employee pay roll software; the data items name, age, sex, address, phone number, etc. are together considered as a single input. All these data items can be considered to be related, since they pertain to a single employee.

Number of outputs: The outputs considered refer to reports printed, screen outputs, error messages produced, etc. While counting the number of outputs, the individual data items within a report are not considered; instead, a set of related data items is counted as one output.

Number of inquiries: Number of inquiries is the number of distinct interactive queries which can be made by the users. These inquiries are the user commands which require specific action by the system.

Number of files: Each logical file is counted. A logical file means groups of logically related data. Thus, logical files can be data structures or physical files.

Number of interfaces: Here the interfaces considered are the interfaces used to exchange information with other systems. Examples of such interfaces are data files on tapes, disks, communication links with other systems etc.

Once the unadjusted function point (UFP) is computed, the technical complexity factor (TCF) is computed next. TCF refines the UFP measure by considering fourteen other factors such as high transaction rates, throughput, and response-time requirements. Each of these 14 factors is assigned a value from 0 (not present or no influence) to 5 (strong influence). The resulting numbers are summed, yielding the total degree of influence (DI). TCF is then computed as TCF = 0.65 + 0.01*DI. As DI can vary from 0 to 70, TCF can vary from 0.65 to 1.35.
Finally, FP = UFP*TCF.
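The two-step computation can be sketched directly from the expressions above. The counts and the fourteen influence ratings used here are made-up example values.

```python
# Function point computation following the UFP and TCF formulas above.

def unadjusted_fp(inputs, outputs, inquiries, files, interfaces):
    return inputs * 4 + outputs * 5 + inquiries * 4 + files * 10 + interfaces * 10

def function_points(ufp, influence_ratings):
    di = sum(influence_ratings)        # total degree of influence, 0..70
    tcf = 0.65 + 0.01 * di             # technical complexity factor
    return ufp * tcf

ufp = unadjusted_fp(inputs=30, outputs=20, inquiries=10, files=5, interfaces=2)
ratings = [3] * 14                     # each of the 14 factors rated 0..5
print(ufp, function_points(ufp, ratings))   # 330 and 330 * 1.07 = 353.1
```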



Q::36 Semaphores are used to:
(A) Synchronise critical resources to prevent deadlock
(B) Synchronise critical resources to prevent contention
(C) Do I/O
(D) Facilitate memory management
Answer: 
Explanation:
A semaphore, in its most basic form, is a protected integer variable that can facilitate and restrict access to shared resources in a multi-processing environment. The two most common kinds of semaphores are counting semaphores and binary semaphores. Counting semaphores represent multiple resources, while binary semaphores, as the name implies, represent two possible states (generally 0 or 1; locked or unlocked). Semaphores were invented by the late Edsger Dijkstra.
Semaphores can be looked at as a representation of a limited number of resources, like seating capacity at a restaurant. If a restaurant has a capacity of 50 people and nobody is there, the semaphore would be initialized to 50. As each person arrives at the restaurant, they cause the seating capacity to decrease, so the semaphore in turn is decremented. When the maximum capacity is reached, the semaphore will be at zero, and nobody else will be able to enter the restaurant. Instead the hopeful restaurant goers must wait until someone is done with the resource, or in this analogy, done eating. When a patron leaves, the semaphore is incremented and the resource becomes available again.

A semaphore can only be accessed using the following operations: wait() and signal(). wait() is called when a process wants access to a resource. This would be equivalent to the arriving customer trying to get an open table. If there is an open table, or the semaphore is greater than zero, then he can take that resource and sit at the table. If there is no open table and the semaphore is zero, the process must wait until one becomes available. signal() is called when a process is done using a resource, or when the patron is finished with his meal.
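A minimal sketch of the restaurant analogy using Python's threading.Semaphore follows, where acquire() plays the role of wait() and release() the role of signal(). The table count, guest count and timings are illustrative assumptions.

```python
# Counting semaphore as a "restaurant" with limited seating.
import threading, time, random

tables = threading.Semaphore(3)        # only 3 guests may be seated at once

def diner(name):
    tables.acquire()                   # wait(): block until a table is free
    try:
        time.sleep(random.uniform(0.1, 0.3))   # "eating"
        print(f"{name} done")
    finally:
        tables.release()               # signal(): free the table for the next guest

threads = [threading.Thread(target=diner, args=(f"guest-{i}",)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```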
Q::37 In which of the following storage replacement strategies is a program placed in the largest available hole in memory?
(A) Best fit         (B) First fit
(C) Worst fit       (D) Buddy
Answer: C
Q::38 Remote computing system involves the use of timesharing systems and:

(A) Real time processing        (B) Batch processing

(C) Multiprocessing                 (D) All of the above
Answer: B
Q::39 Non-modifiable procedures are called:
(A) Serially useable procedures      
(B) Concurrent procedures
(C) Reentrant procedures                 
(D) Topdown procedures
Answer: C
Explanation:
In computing, a computer program or subroutine is called reentrant if it can be interrupted in the middle of its execution and then safely called again ("re-entered") before its previous invocations complete execution. The interruption could be caused by an internal action such as a jump or call, or by an external action such as a hardware interrupt or signal. Once the reentered invocation completes, the previous invocations will resume correct execution.
This definition originates from single-threaded programming environments where the flow of control could be interrupted by a hardware interrupt and transferred to an interrupt service routine (ISR). Any subroutine used by the ISR that could potentially have been executing when the interrupt was triggered should be reentrant. Often, subroutines accessible via the operating system kernel are not reentrant. Hence, interrupt service routines are limited in the actions they can perform; for instance, they are usually restricted from accessing the file system and sometimes even from allocating memory.
A subroutine that is directly or indirectly recursive should be reentrant. This policy is partially enforced by structured programming languages. However a subroutine can fail to be reentrant if it relies on a global variable to remain unchanged but that variable is modified when the subroutine is recursively invoked.
Reentrant code may not hold any static (or global) non-constant data.



Reentrant functions can work with global data. For example, a reentrant interrupt service routine could grab a piece of hardware status to work with (e.g. serial port read buffer) which is not only global, but volatile. Still, typical use of static variables and global data is not advised, in the sense that only atomic read-modify-write instructions should be used in these variables (it should not be possible for an interrupt or signal to come during the execution of such an instruction).
Reentrant code may not modify its own code.
The operating system might allow a process to modify its code. There are various reasons for this (e.g., blitting graphics quickly) but this would cause a problem with reentrancy, since the code might not be the same next time.
It may, however, modify itself if it resides in its own unique memory. That is, if each new invocation uses a different physical machine code location where a copy of the original code is made, it will not affect other invocations even if it modifies itself during execution of that particular invocation (thread).
Reentrant code may not call non-reentrant computer programs or routines.
Multiple levels of 'user/object/process priority' and/or multiprocessing usually complicate the control of reentrant code. It is important to keep track of any access and or side effects that are done inside a routine designed to be reentrant.
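As a small illustration of the global-data rule above, the sketch below contrasts a function that accumulates its result in a module-level variable (and would give wrong results if re-entered mid-computation) with a version that keeps all state local and is therefore reentrant. Both functions are invented for the example.

```python
# Non-reentrant vs. reentrant: the difference is where the state lives.

_acc = 0                               # shared, mutable module-level state

def sum_upto_nonreentrant(n):
    global _acc
    _acc = 0
    for i in range(1, n + 1):
        _acc += i                      # a nested call would clobber this
    return _acc

def sum_upto_reentrant(n):
    acc = 0                            # local state only: safe to re-enter
    for i in range(1, n + 1):
        acc += i
    return acc

print(sum_upto_reentrant(10))          # 55
```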
Q::40 Match the following
(a) Disk scheduling                 (1) Round robin
(b) Batch processing               (2) Scan
(c) Time sharing                       (3) LIFO
(d) Interrupt processing          (4) FIFO
(A) a-3, b-4, c-2, d-1    
(B) a-4, b-3, c-2, d-1
(C) a-2, b-4, c-1, d-3   
(D) a-3, b-4, c-1, d-2
Answer: C


