Thursday, June 26, 2008

Software Testing/QA Interview Questions

Important Testing Interview Questions:

What is quality assurance?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

What is the difference between QA and testing?
Testing involves operation of a system or application under controlled conditions and evaluating the results. It is oriented to 'detection'.
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

Describe the difference between validation and verification

Verification is done through frequent evaluations and meetings that appraise the documents, plans, code, requirements, and specifications. It is carried out with checklists, walkthroughs, and inspection meetings.
Validation is done during actual testing, and it takes place after all the verifications have been completed.



What are SDLC and STLC? Explain their different phases.
SDLC
Requirement phase
Designing phase (HLD, DLD (Program spec))
Coding
Testing
Release
Maintenance

STLC
System Study
Test planning
Writing Test case or scripts
Review the test case
Executing test case
Bug tracking
Report the defect


What is the difference between Bug, Error and Defect?

Error : a deviation of the actual value from the expected value.
Bug : a fault found in the development environment, before the product is shipped to the customer.

Defect : a fault found in the product itself after it has been shipped to the customer.

What is Error guessing and Error seeding ?
Error Guessing is a test case design technique in which the tester guesses what faults might occur and designs tests to expose them.
Error Seeding is the process of intentionally adding known faults to a program in order to monitor the rate of their detection and removal, and to estimate the number of faults remaining in the program.
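The error-seeding estimate can be sketched as a small calculation. This is the classic fault-seeding (Mills) model; the numbers below are invented for illustration:

```python
# Fault-seeding estimate: if we find s of S seeded faults and n real
# (indigenous) faults, the estimated total of real faults is N ~= n * S / s.

def estimate_remaining_faults(seeded_total, seeded_found, real_found):
    """Estimate how many indigenous faults remain after testing."""
    if seeded_found == 0:
        raise ValueError("no seeded faults found yet - estimate undefined")
    estimated_total = real_found * seeded_total / seeded_found
    return estimated_total - real_found

# Example: 20 faults seeded, 15 of them found, 30 real faults found.
# Estimated total real faults = 30 * 20 / 15 = 40, so about 10 remain.
print(estimate_remaining_faults(20, 15, 30))
```

The assumption behind the model is that seeded faults are found at the same rate as real ones, which is why seeded faults should resemble realistic defects.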

What is Bug Life Cycle?
Bug Life Cycle is the sequence of states a bug passes through after it is raised or reported:
New or Opened
Assigned
Fixed
Tested
Closed

Explain Peer Review in Software Testing.

It is a form of review in which colleagues are invited to examine your work products for defects and improvement opportunities.
Some Peer review approaches are,
Inspection – the most systematic and rigorous type of peer review. Inspections are more effective at finding defects than informal reviews.
Ex : In Motorola’s Iridium project nearly 80% of the defects were detected through inspections, whereas only 60% of the defects were detected through informal reviews.
Team Reviews – a planned and structured approach, but less formal and less rigorous compared to inspections.
Walkthrough – an informal review in which the work product’s author describes it to some colleagues and asks for suggestions. Walkthroughs are informal because they typically do not follow a defined procedure, do not specify exit criteria, require no management reporting, and generate no metrics.
Pair Programming – two developers work together on the same program at a single workstation and continuously review each other’s work.
Peer Deskcheck – In Peer Deskcheck only one person besides the author examines the work product. It is an informal review, where the reviewer can use defect checklists and some analysis methods to increase the effectiveness.


What is Traceability Matrix ?
A Traceability Matrix is a document used to track requirements, test cases, and defects. It is prepared to satisfy the client that test coverage is complete end to end. It typically contains the Requirement/Baseline document reference number, the Test case/condition, and the Defect/Bug ID, so that anyone can trace from a given defect ID back to the requirement.

Boundary value testing
It is a technique to check whether the application accepts the expected range of values and rejects values that fall outside that range.
Ex. A user ID text box has to accept alphabetic characters ( a-z ) with a length of 4 to 10 characters.
BVA is done like this: max = 10, pass; max-1 = 9, pass; max+1 = 11, fail; min = 4, pass; min+1 = 5, pass; min-1 = 3, fail.
In this way we check the corner values and conclude whether the application accepts the correct range of values.
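The boundary checks above can be sketched as a small Python test; the `is_valid_user_id` function is a stand-in for the real validation rule in the example:

```python
def is_valid_user_id(user_id):
    """Rule from the example: 4-10 lowercase alphabetic characters."""
    return 4 <= len(user_id) <= 10 and user_id.isalpha() and user_id.islower()

# Boundary value analysis: probe min-1, min, min+1 and max-1, max, max+1.
boundary_cases = {
    "abc":         False,  # min-1 = 3 chars  -> reject
    "abcd":        True,   # min   = 4 chars  -> accept
    "abcde":       True,   # min+1 = 5 chars  -> accept
    "abcdefghi":   True,   # max-1 = 9 chars  -> accept
    "abcdefghij":  True,   # max   = 10 chars -> accept
    "abcdefghijk": False,  # max+1 = 11 chars -> reject
}
for value, expected in boundary_cases.items():
    assert is_valid_user_id(value) == expected, value
print("all boundary cases pass")
```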

Equivalence testing
It divides the inputs into classes (partitions) and tests one representative of each class, on the assumption that all values in a class behave the same. It is normally used to check the type of the object.
Ex. A user ID text box has to accept alphabetic characters ( a-z ) with a length of 4 to 10 characters.
For the positive condition, we test the object with lowercase alphabetic input only (a-z) and check that the value is accepted: it should pass.
For the negative condition, we test with anything other than lowercase alphabets (A-Z, 0-9, blank, etc.): the input should be rejected.
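A sketch of equivalence partitioning in Python, using the same hypothetical user ID rule; one representative per class stands in for every member of that class:

```python
def is_valid_user_id(user_id):
    """Rule from the example: 4-10 lowercase alphabetic characters."""
    return 4 <= len(user_id) <= 10 and user_id.isalpha() and user_id.islower()

# One representative value per equivalence class is enough - every member
# of a class is assumed to behave the same way.
partitions = [
    ("valid lowercase", "hello", True),   # a-z, valid length -> accept
    ("uppercase",       "HELLO", False),  # A-Z class         -> reject
    ("digits",          "12345", False),  # 0-9 class         -> reject
    ("blank",           "",      False),  # empty input       -> reject
]
for name, sample, expected in partitions:
    assert is_valid_user_id(sample) == expected, name
print("all partitions behave as expected")
```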

What is Defect Leakage ?
Defect leakage occurs at the customer or end-user side after the application is delivered. If, after release, the end user finds any defect while using the application, it is called defect leakage. Defect leakage is also called bug leak.

What is Bidirectional Traceability ?
Bidirectional traceability must be implemented both forward and backward (i.e., from requirements to end products and from end products back to requirements). When requirements are well managed, traceability can be established from source requirements to their lower-level requirements and from lower-level requirements back to their sources. This helps us determine that all source requirements have been completely addressed.

What is Buffer Overflow?
A buffer overflow occurs when a program or process tries to store more data in a buffer (temporary data storage area) than it was intended to hold. Since buffers are created to contain a finite amount of data, the extra information - which has to go somewhere - can overflow into adjacent buffers, corrupting or overwriting the valid data held in them.


What is a Memory Leak?
A memory leak occurs when a programmer dynamically allocates memory space for a variable of some type, but then fails to free up that space before the program completes. This leaves less free memory for the system to use. Repeated running of the program or function that causes such memory loss can eventually crash the system or result in a denial of service.


What is 'configuration management'?
Configuration management is a process to control and document any changes made during the life of a project. Revision control, Change Control, and Release Control are important aspects of Configuration Management.

For Web Applications what type of tests are you going to do?
Web-based applications present new challenges. These include:
- Short release cycles;
- Constantly Changing Technology;
- Possible huge number of users during initial website launch;
- Inability to control the user's running environment;
- 24-hour availability of the web site.

The following are the techniques used for web application testing.
1. Functionality Testing
Functionality testing involves making sure the features that most affect user interactions work properly. These include:
· forms
· searches
· pop-up windows
· shopping carts
· online payments

2. Usability Testing
Many users have low tolerance for anything that is difficult to use or that does not work. A user's first impression of the site is important, and many websites have become cluttered with an increasing number of features. For general-use websites, frustrated users can easily click over to a competitor's site.

Usability testing involves the following main steps:
· identify the website's purpose;
· identify the intended users;
· define tests and conduct the usability testing;
· analyze the acquired information.

3. Navigation Testing
Good Navigation is an essential part of a website, especially those that are complex and provide a lot of information. Assessing navigation is a major part of usability Testing.

4. Forms Testing
Websites that use forms need tests to ensure that each field works properly and that the form posts all data as intended by the designer.

5. Page Content Testing
Each web page must be tested for correct content from the user perspective. These tests fall into two categories: ensuring that each component functions correctly and ensuring that the content of each is correct.

6. Configuration and Compatibility testing
A key challenge for web applications is ensuring that the user sees a web page as the designer intended. The user can select different browser software and browser options, use different network software and on-line service, and run other concurrent applications. We execute the application under every browser/platform combination to ensure the web sites work properly under various environments.

7. Reliability and Availability Testing
A key requirement of a website is that it be available whenever the user requests it, 24 hours a day, every day. The number of users accessing the web site simultaneously may also affect the site's availability.

8. Performance Testing
Performance testing, which evaluates system performance under normal and heavy usage, is crucial to the success of any web application. A system that takes too long to respond may frustrate users, who can quickly move to a competitor's site. Given enough time, every page request will eventually be delivered; performance testing seeks to ensure that the website server responds to browser requests within defined parameters.

9. Load Testing
The purpose of load testing is to model real-world experiences, typically by generating many simultaneous users accessing the website. We use automation tools to conduct a valid load test, because they can emulate thousands of users sending simultaneous requests to the application or the server.
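A minimal sketch of the idea in Python. The `simulated_user` function here is a stand-in for a real HTTP round trip (a real load test would issue requests against the site, usually via a dedicated load tool):

```python
# Tiny load-test harness sketch: spawn many concurrent "users" against a
# handler and collect per-user response times.
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_user(user_id):
    """Stand-in for one user's request/response round trip."""
    start = time.perf_counter()
    time.sleep(0.01)          # placeholder for the network round trip
    return time.perf_counter() - start

# 50 concurrent workers driving 200 simulated user sessions.
with ThreadPoolExecutor(max_workers=50) as pool:
    response_times = list(pool.map(simulated_user, range(200)))

print(len(response_times), "requests completed,",
      f"slowest: {max(response_times):.3f}s")
```

A real harness would also assert that response times stay within the defined performance parameters as the user count rises.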

10. Stress Testing
Stress Testing consists of subjecting the system to varying and maximum loads to evaluate the resulting performance. We use automated test tools to simulate loads on website and execute the tests continuously for several hours or days.

11. Security Testing
Security is a primary concern when communicating and conducting business - especially sensitive and business-critical transactions - over the internet. The user wants assurance that personal and financial information is secure. Finding the vulnerabilities in an application that would grant an unauthorized user access to the system is important.

What is the difference between deliverables and release notes?
Release notes are provided with every new release of the build, listing which bugs have been fixed and which are still pending.
Deliverables are provided at the end of testing. The test plan, test cases, defect reports, and documented defects that were not fixed all come under deliverables.

What is a test case and a use case?
A use case is a description of a certain feature of an application in terms of actors, actions, and responses.
For example, if the user enters a valid user ID and password and clicks the login button, the system should display the home page.
Here the user is the ACTOR, the operations performed are the ACTIONS, and the system's display of the home page is the RESPONSE.

We actually write our test cases from the use cases, if any are provided in the SRS.
A test case is a set of test inputs, execution conditions, and expected results developed to validate a particular functionality of the AUT (application under test).

How can you say that you have done good testing?
Good testing can be done, but complete testing cannot. Common factors in deciding when testing is good enough are:
1. Test cases completed with a certain percentage passed
2. Coverage of code/functionality/requirements reaches a specified point
3. Bug rate falls below a certain level
4. Beta or alpha testing period ends


Difference between defect severity and defect priority?
Defect Severity : the damage (or effect) of the defect on the application.
Ex:
Fatal - system crash, system shutdown, system restart, etc.
Major - internal errors, one module completely blocked, etc.
Minor - validation issues, misplaced functions, etc.
Cosmetic - spelling mistakes, etc.

Change Request - a request for changes to functionality, objects, etc.

Defect Priority : the order in which defects should be solved by the developer.
High priority - the defect should be solved right away.
Medium priority - less urgent, but the developer should still solve it in due course.

When do we assign severity and priority for bugs in a bug report?
While executing the application, if we find any functionality that is not working as per the requirements, we specify the severity based on the effect of the defect on the application. Priority, however, is assigned by the test manager, test lead, or project manager.


What type of test scenarios will you write for a web testing project?
Verify the URL of the web application.
Verify the login and password credentials.
Verify the validation on the login page using test data.
Verify the static and dynamic text fields and also the hyperlinks.
Verify the alignment and any overlapping of screens in the web application.
Perform sanity testing before proceeding to system testing.


What is a three-way traceability matrix?
A three-way traceability matrix is a document containing the requirements, test cases, and defects, if any (a mapping of all three).


What are test metrics?

Test metrics typically include:
• Total tests
• Tests run
• Tests passed
• Tests failed
• Tests deferred
• Tests passed the first time


Types of Testing

Black box testing - Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing - This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions.

Unit testing - Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.

Incremental integration testing - Continuous testing of an application as new functionality is added to the existing code. It verifies that each piece works on its own before all parts of the program are completed; test drivers may be introduced where pieces are missing. This is done by programmers or by testers.

Integration testing - This testing is combining the ‘parts’ of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to the client/server and distributed systems.

Functional testing - This type of testing ignores the internal parts and focuses on whether the output is as per requirements. It is black-box testing geared to the functional requirements of an application.

System testing - Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.
OR
System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing seeks to detect defects both within the "inter-assemblages" (the interactions between integrated components) and within the system as a whole.

End-to-end testing - Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing - Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the system is not stable enough for further testing, and the build is sent back to be fixed.

Regression testing - Testing the application after a change in a module or part of the application, to verify that the code change has not adversely affected the rest of the application.
Re-test - Retesting means testing only a certain part of the application again, without considering how it affects other parts or the whole application.

Acceptance testing - Normally this type of testing is done to verify that the system meets the customer-specified requirements. Users or customers do this testing to determine whether to accept the application.

Load testing - It is performance testing to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing - The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as input beyond storage capacity, complex database queries, or continuous input to the system or database.

Performance testing - A term often used interchangeably with "stress" and "load" testing: checking whether the system meets its performance requirements. Various performance and load tools are used for this.

Usability testing - This testing is done to check the 'user-friendliness' of the application. This depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing - Tested for full, partial, or upgrade install/uninstall processes on different operating systems under different hardware, software environment.

Recovery testing - Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - A process used to determine whether the security features of a system are implemented as designed and whether they are adequate for a proposed application environment. This process involves functional testing, penetration testing, and verification.

Installation testing - Installation testing is done to verify whether the hardware and software are installed and configured properly, ensuring that all the system components are exercised during the testing process. Installation testing also covers testing with high data volumes, error messages, and security.

Compatibility testing - Testing how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.

Comparison testing - Comparison of product strengths and weaknesses with previous versions or other similar products.

Alpha testing - An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.

Beta testing - Testing typically done by end-users or others. Final testing before releasing application for commercial purpose.

Static testing - A form of software testing where the software isn't actually executed, in contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and manual reading of the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections, and walkthroughs are also used.

Dynamic testing (or dynamic analysis) - Is a term used in software engineering to describe the testing of the dynamic behavior of code. That is, dynamic analysis refers to the examination of the physical response from the system to variables that are not constant and change with time.

Negative testing - Testing the system using negative data is called negative testing, e.g. testing the password where it should be minimum of 8 characters so testing it using 6 characters is negative testing.

Ad hoc testing - Ad hoc testing is testing the application without following any rules or test cases. For ad hoc testing one should have strong knowledge of the application.

Soak testing - Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear only after a large number of transactions have been executed.

Describe bottom-up and top-down approaches in Integration Testing.
Bottom-up approach : Testing is conducted from the sub-modules up to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate the main module.
Top-down approach : Testing is conducted from the main module down to the sub-modules. If a sub-module is not yet developed, a temporary program called a STUB is used to simulate the sub-module.
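A small Python sketch of the top-down case, where a stub stands in for an unfinished sub-module (the payment-gateway names are invented for illustration; a bottom-up driver would be the mirror image, calling a finished sub-module):

```python
# Top-down integration sketch: the main module (checkout) is ready, the
# payment sub-module is not, so a STUB returns canned answers in its place.

class PaymentGatewayStub:
    """Stub for the unfinished payment sub-module."""
    def charge(self, amount):
        # Canned response - no real payment logic exists yet.
        return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    """Main module under test; depends on the payment sub-module."""
    result = gateway.charge(cart_total)
    return result["status"] == "approved"

# The main module can now be exercised before the sub-module exists.
assert checkout(99.50, PaymentGatewayStub()) is True
print("checkout tested against the stub")
```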


Explain Load, Performance, Stress Testing with an example
Load testing and performance testing are commonly considered positive testing, whereas stress testing is considered negative testing.
Say, for example, there is an application that can handle 25 simultaneous user logins. In load testing we test the application with 25 users and check how it behaves at that level; in performance testing we concentrate on the time taken to perform the operations. In stress testing we test with more than 25 users, keep increasing the number, and observe at what point the application breaks down or exhausts the hardware resources.

Will come soon with more concepts.........

Sunday, June 22, 2008

Imp Data Base Concepts

KEYS

A key is a set of attributes (usually just one) that will always uniquely identify a record.

• A key consisting of more than one attribute is called a composite key.

• The primary key for a table is the key chosen to uniquely identify records within the table. It may be defined on a single column or a combination of columns, and it cannot contain null values.

• A foreign key is a field from a table that refers to (or targets) a specific key, usually the primary key, in another table.

A unique key is defined as having no two of its values the same. The columns of a unique key cannot contain null values. A table can have multiple unique keys or no unique keys at all.

The fields or combinations of fields that could serve as the primary key but are not chosen as such are known as candidate keys or alternate keys.


CONSTRAINTS

A constraint is a property assigned to a column or the set of columns in a table that prevents certain types of inconsistent data values from being placed in the column(s). Constraints are used to enforce the data integrity. This ensures the accuracy and reliability of the data in the database.

Primary Key constraint

This constraint is used to guarantee that a column or set of columns on a table contain unique values for every record in the given table. This lets you ensure data integrity by always being able to uniquely identify the record in the table. A table can have only one primary key constraint defined on it, and the rows in the primary key columns cannot contain null values. A primary key constraint can be defined when a table is created, or it can be added later.

Unique Constraints

Unique constraints may be placed on multiple columns. They constrain the UPDATE/INSERTS on the table so that the values being updated or inserted do not match any other row in the table for the corresponding values.

Foreign Key (FK) Constraints

A foreign key constraint allows certain fields in one table to refer to fields in another table. This type of constraint is useful if you have two tables, one of which has partial information, details on which can be sought from another table with a matching entry. A foreign key constraint in this case will prevent the deletion of an entry from the table with detailed information if there is an entry in the table with partial information that matches it.

Check Constraints

A check constraint prevents updates/inserts on the table by placing a check condition on the selected column. The UPDATE/INSERT is allowed only if the check condition qualifies.

Not Null Constraint

A Not Null constraint enforces that the column will not accept null values. It enforces valid entries for a given column by restricting the type, the format, or the range of possible values.
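The constraints above can be demonstrated with Python's built-in `sqlite3` module (exact syntax and behavior vary across database engines, and the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this for FK checks
conn.execute("""
    CREATE TABLE department (
        dept_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL UNIQUE
    )""")
conn.execute("""
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        rating  INTEGER CHECK (rating BETWEEN 1 AND 5),
        dept_id INTEGER REFERENCES department(dept_id)
    )""")
conn.execute("INSERT INTO department VALUES (1, 'QA')")
conn.execute("INSERT INTO employee VALUES (123, 'Alice', 5, 1)")

# Each statement below violates one constraint and raises IntegrityError.
for bad in [
    "INSERT INTO employee VALUES (123, 'Bob', 3, 1)",    # duplicate PK
    "INSERT INTO employee VALUES (124, NULL, 3, 1)",     # NOT NULL
    "INSERT INTO employee VALUES (125, 'Carol', 6, 1)",  # CHECK fails
    "INSERT INTO employee VALUES (126, 'Dave', 2, 9)",   # FK: no dept 9
]:
    try:
        conn.execute(bad)
    except sqlite3.IntegrityError as e:
        print("rejected:", e)
```

Only the valid row survives: the constraints reject inconsistent data before it ever reaches the table.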



STORED PROCEDURE

A stored procedure is a set of SQL statements that has already been written and saved in the database. We can run the stored procedure from the database's command environment. Stored procedures offer several benefits:

  1. Precompiled execution. SQL Server compiles each stored procedure once and then reutilizes the execution plan. This results in tremendous performance boosts when stored procedures are called repeatedly.

  2. Reduced client/server traffic. If network bandwidth is a concern in your environment, you'll be happy to learn that stored procedures can reduce long SQL queries to a single line that is transmitted over the wire.

  3. Efficient reuse of code and programming abstraction. Stored procedures can be used by multiple users and client programs. If you utilize them in a planned manner, you'll find the development cycle takes less time.

  4. Enhanced security controls. You can grant users permission to execute a stored procedure independently of underlying table permissions.



JOINS

SQL joins are used to relate information in different tables. A join condition is the part of the SQL query that retrieves rows from two or more tables.

Self Join

A self-join is a query in which a table is joined (compared) to itself. Self-joins are used to compare values in a column with other values in the same column of the same table.

Inner Join

An INNER JOIN is most often (but not always) created between the primary key column of one table and the foreign key column of another table.

The INNER JOIN will select all rows from both tables as long as there is a match between the columns.

Outer Joins

An outer join returns all rows from the joined tables whether or not there is a matching row between them.

Left Outer Joins

When you use a left outer join to combine two tables, all the rows from the left-hand table are included in the results.

Right Outer Joins

A right outer join is conceptually the same as a left outer join except that all the rows from the right-hand table are included in the results.

Full outer join

A full outer join combines the results of both left and right outer joins. The joined table will contain all records from both tables, and fill in NULLs for missing matches on either side.


Cross Join

A cross join returns not the sum but the product of two tables. Each row in the left-hand table is matched up with each row in the right-hand table. It's the set of all possible row combinations, without any filtering
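The main join types can be sketched with Python's `sqlite3` module (note that SQLite only added RIGHT and FULL OUTER JOIN in version 3.39; the table contents here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp  (emp_id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER);
    INSERT INTO dept VALUES (1, 'QA'), (2, 'Dev');
    INSERT INTO emp  VALUES (10, 'Alice', 1), (11, 'Bob', NULL);
""")

# INNER JOIN: only rows with a match in both tables.
inner = conn.execute(
    "SELECT e.name, d.name FROM emp e JOIN dept d ON e.dept_id = d.dept_id"
).fetchall()

# LEFT OUTER JOIN: every row from the left table, NULL where no match.
left = conn.execute(
    "SELECT e.name, d.name FROM emp e LEFT JOIN dept d ON e.dept_id = d.dept_id"
).fetchall()

# CROSS JOIN: the full product of the two tables.
cross = conn.execute("SELECT * FROM emp CROSS JOIN dept").fetchall()

print(inner)       # only Alice matches a department
print(left)        # Bob is kept with NULL for the department
print(len(cross))  # 2 emp rows x 2 dept rows = 4 combinations
```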


Delete & Truncate

DELETE is a logged operation on a per-row basis. This means the deletion of each row is logged and the row is physically deleted.

You can DELETE any row that will not violate a constraint, while leaving the foreign key or any other constraint in place.

TRUNCATE is also a logged operation, but in a different way. TRUNCATE logs the deallocation of the data pages in which the data exists. The deallocation of data pages means that your data rows still actually exist in the data pages, but the extents have been marked as empty for reuse. This is what makes TRUNCATE a faster operation to perform over DELETE.

You cannot TRUNCATE a table that has any foreign key constraints. You will have to remove the constraints, TRUNCATE the table, and reapply the constraints.

TRUNCATE will reset any identity columns to the default seed value. This means if you have a table with an identity column and you have 264 rows with a seed value of 1, your last record will have the value 264 (assuming you started with value 1) in its identity columns. After TRUNCATEing your table, when you insert a new record into the empty table, the identity column will have a value of 1. DELETE will not do this. In the same scenario, if your rows are deleted, when inserting a new row into the empty table, the identity column will have a value of 265.

Why Truncate is faster than Delete?

Reason: When you issue DELETE, all the data is first copied into the rollback tablespace, and then the delete operation is performed. That is why, when you type ROLLBACK after deleting from a table, you can get the data back (the system restores it for you from the rollback tablespace). All of this takes time. TRUNCATE, however, removes the data directly without copying it into the rollback tablespace, so TRUNCATE is faster - but once you truncate, you cannot get the data back.

What is COMMIT & ROLLBACK statement in SQL ?

COMMIT terminates the current transaction and makes all the changes that occurred within it permanent, committing them to the database. COMMIT can also be used in a stored procedure.

ROLLBACK likewise terminates the current transaction, but instead undoes the changes that the transaction made to the database.
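A minimal demonstration with Python's `sqlite3` module, whose `commit()` and `rollback()` methods map onto the SQL statements (the account table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100)")
conn.commit()      # COMMIT: the insert is now permanent

conn.execute("UPDATE account SET balance = 0 WHERE id = 1")
conn.rollback()    # ROLLBACK: the uncommitted update is undone

balance = conn.execute(
    "SELECT balance FROM account WHERE id = 1"
).fetchone()[0]
print(balance)  # 100 - the rolled-back change never took effect
```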

What is Data Integrity and what are its categories?

Enforcing data integrity ensures the quality of the data in the database. For example, if an employee is entered with an employee_id value of 123, the database should not allow another employee to have an ID with the same value. If you have an employee_rating column intended to have values ranging from 1 to 5, the database should not accept a value of 6. If the table has a dept_id column that stores the department number for the employee, the database should allow only values that are valid for the department numbers in the company. Two important steps in planning tables are to identify valid values for a column and to decide how to enforce the integrity of the data in the column. Data integrity falls into these categories:
1) Entity integrity

2)Domain integrity

3) Referential integrity

4) User-defined integrity

Entity Integrity: Entity integrity defines a row as a unique entity for a particular table. Entity integrity enforces the integrity of the identifier column(s) or the primary key of a table (through indexes, UNIQUE constraints, PRIMARY KEY constraints, or IDENTITY properties).
Domain Integrity: Domain integrity is the validity of entries for a given column. You can enforce domain integrity by restricting the type (through data types), the format (through CHECK constraints and rules), or the range of possible values (through FOREIGN KEY constraints, CHECK constraints, DEFAULT definitions, NOT NULL definitions, and rules).
Referential Integrity: Referential integrity preserves the defined relationships between tables when records are entered or deleted. In Microsoft® SQL Server™ 2000, referential integrity is based on relationships between foreign keys and primary keys or between foreign keys and unique keys (through FOREIGN KEY and CHECK constraints). Referential integrity ensures that key values are consistent across tables. Such consistency requires that there be no references to nonexistent values and that if a key value changes, all references to it change consistently throughout the database. When you enforce referential integrity, SQL Server prevents users from:
· Adding records to a related table if there is no associated record in the primary table.
· Changing values in a primary table that result in orphaned records in a related table.
· Deleting records from a primary table if there are matching related records.
For example, with the sales and titles tables in the pubs database, referential integrity is based on the relationship between the foreign key (title_id) in the sales table and the primary key (title_id) in the titles table.
User-Defined Integrity: User-defined integrity allows you to define specific business rules that do not fall into one of the other integrity categories. All of the integrity categories support user-defined integrity (all column-level and table-level constraints in CREATE TABLE, stored procedures, and triggers).
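The first three categories can be sketched in DDL. A minimal example based on the employee scenario above (table and column names are illustrative, not from any real schema):

```sql
-- Hypothetical tables illustrating the integrity categories.
CREATE TABLE department (
    dept_id     INT          NOT NULL PRIMARY KEY        -- entity integrity
)

CREATE TABLE employee (
    employee_id INT          NOT NULL PRIMARY KEY,       -- entity integrity
    emp_name    VARCHAR(50)  NOT NULL,                   -- domain integrity (NOT NULL)
    emp_rating  INT          NOT NULL
        CHECK (emp_rating BETWEEN 1 AND 5),              -- domain integrity (CHECK)
    dept_id     INT          NOT NULL
        REFERENCES department (dept_id)                  -- referential integrity
)
```

With these constraints in place, a second employee with employee_id 123, a rating of 6, or a dept_id that does not exist in department would all be rejected by the database itself.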

What's the maximum size of a row?

8060 bytes in SQL Server 2000. (Large object columns such as text, ntext, and image are stored outside the row and do not count toward this limit.)

What is a deadlock and what is a live lock? How will you go about resolving deadlocks?

A deadlock is a situation in which two processes, each holding a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release its lock unless one of the user processes is terminated. SQL Server detects deadlocks and terminates (rolls back) one user's process.

A livelock occurs when a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.
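One way to influence which process becomes the deadlock victim is the session option SET DEADLOCK_PRIORITY. A sketch using the pubs sample database:

```sql
-- Mark this session as the preferred deadlock victim. If a deadlock
-- occurs, SQL Server terminates this session's batch with error 1205,
-- and the application can simply retry the transaction.
SET DEADLOCK_PRIORITY LOW

BEGIN TRAN
    UPDATE titles SET price = price * 1.1 WHERE title_id = 'BU1032'
COMMIT TRAN
```

The session that receives error 1205 has had its transaction rolled back, so it is safe to rerun the whole transaction.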

Backup Scopes

The scope of a backup defines the portion of the database covered by the backup. It identifies the database, file(s) and/or filegroup(s) that SQL Server will back up.

Full Backups include all data within the backup scope. For example, a full database backup will include all data in the database, regardless of when it was last created or modified. Similarly, a full partial backup will include the entire contents of every file and filegroup within the scope of that partial backup.

Differential Backups include only that portion of the data that has changed since the last full backup. For example, if you perform a full database backup on Monday morning and then perform a differential database backup on Monday evening, the differential backup will be a much smaller file (that takes much less time to create) that includes only the data changed during the day on Monday.
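The Monday example above in T-SQL (the database name and backup file paths are illustrative):

```sql
-- Monday morning: full database backup (all data in the database).
BACKUP DATABASE pubs
    TO DISK = 'C:\Backups\pubs_full.bak'

-- Monday evening: differential backup (only data changed since the
-- last full backup), which is smaller and faster to create.
BACKUP DATABASE pubs
    TO DISK = 'C:\Backups\pubs_diff.bak'
    WITH DIFFERENTIAL
```

To restore, you would restore the full backup first and then apply the most recent differential on top of it.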



TRIGGER

A trigger is an object contained within an SQL Server database that is used to execute a batch of SQL code whenever a specific event occurs. As the name suggests, a trigger is “fired” whenever an INSERT, UPDATE, or DELETE SQL command is executed against a specific table.

Triggers are associated with a single table, and are automatically executed internally by SQL Server.

Triggers are a powerful tool that can be used to enforce business rules automatically when data is modified. Triggers can also be used to maintain data integrity, but they should not be the first choice for it: use a trigger to maintain data integrity only if you are unable to enforce it using CONSTRAINTS, RULES, and DEFAULTS. Triggers cannot be created on temporary tables.

(OR)

A trigger is a database object that is attached to a table. In many aspects it is similar to a stored procedure. As a matter of fact, triggers are often referred to as a "special kind of stored procedure." The main difference between a trigger and a stored procedure is that the former is attached to a table and is only fired when an INSERT, UPDATE or DELETE occurs. You specify the modification action(s) that fire the trigger when it is created.

When to Use Triggers

There are many reasons to use triggers. If you have a table that keeps a log of messages, you may want a copy of the urgent ones mailed to you. Without triggers you would have other solutions, though none as elegant: you could modify the application(s) logging the messages, which means redundantly coding the same logic in every application that logs messages.

Tables can have multiple triggers. The CREATE TRIGGER statement can be defined with the FOR UPDATE, FOR INSERT, or FOR DELETE clauses to target a trigger to a specific class of data modification actions. When FOR UPDATE is specified, the IF UPDATE (column_name) clause can be used to target a trigger to updates affecting a particular column.
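A sketch of such a trigger (the table and column names here are hypothetical):

```sql
-- Hypothetical audit trigger: fires only on UPDATEs to the messages
-- table, and only does its work when the urgency column was touched.
CREATE TRIGGER trg_messages_urgency
ON messages
FOR UPDATE
AS
IF UPDATE (urgency)
BEGIN
    -- The inserted pseudo-table holds the new row images.
    INSERT INTO urgent_log (msg_id, logged_at)
    SELECT msg_id, GETDATE()
    FROM inserted
    WHERE urgency = 'high'
END
```

Because the trigger tests IF UPDATE (urgency), updates that touch only other columns skip the logging work entirely.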

SQL Server 2000 greatly enhances trigger functionality, extending the capabilities of the triggers you already know and love, and adding a whole new type of trigger, the "Instead Of" trigger.

SQL Server 2000 has many types of triggers:

  1. After Trigger

  2. Multiple After Triggers

  3. Instead Of Triggers

  4. Mixing Trigger Types
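An INSTEAD OF trigger replaces the triggering action rather than running after it. A common use is a "soft delete", sketched here with hypothetical table names:

```sql
-- Hypothetical: intercept DELETEs and flag rows instead of removing
-- them. The original DELETE never touches the table's rows.
CREATE TRIGGER trg_orders_softdelete
ON orders
INSTEAD OF DELETE
AS
BEGIN
    UPDATE o
    SET is_deleted = 1
    FROM orders o
    JOIN deleted d ON o.order_id = d.order_id
END
```

Here the deleted pseudo-table tells the trigger which rows the user tried to remove, and the trigger substitutes its own action for the DELETE.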



VIEW

A view is a pre-written query that is stored in the database. A view consists of a SELECT statement; when you run the view, that SELECT statement is executed and its result set is returned as though it were a table.

A view can be useful when there are multiple users with different levels of access, who all need to see portions of the data in the database (but not necessarily all of the data). Views can do the following:

  • Restrict access to specific rows in a table

  • Restrict access to specific columns in a table

  • Join columns from multiple tables and present them as though they are part of a single table

  • Present aggregate information (such as the results of the COUNT function)

(OR)

A SQL view is a virtual table based on a SQL SELECT query. Essentially a view is very close to a real database table (it has columns and rows just like a regular table), except that real tables store data while views do not. The view's data is generated dynamically when the view is referenced. A view references one or more existing database tables or other views.

In effect every view is a filter of the table data referenced in it and this filter can restrict both the columns and the rows of the referenced tables.
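For example, a view over the pubs sample database that restricts both the columns and the rows of the underlying table:

```sql
-- Expose only business titles, and only non-sensitive columns.
CREATE VIEW business_titles
AS
SELECT title_id, title, pubdate
FROM titles
WHERE type = 'business'
```

Users can then be granted SELECT permission on business_titles without being given any access to the titles table itself.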

What is normalization?

Normalization is the process of designing a data model to efficiently store data in a database. The end result is that redundant data is eliminated, and each table stores only the data directly related to its own key.

For example, let's say we store City, State and ZipCode data for customers in the same table as the other customer data. With this approach, we keep repeating the City, State and ZipCode data for all customers in the same area. Instead of storing the same data again and again, we could normalize the data and create a related table called City. The City table could then store City, State and ZipCode along with an ID that relates back to the Customer table; we can eliminate those three columns from the Customer table and add the new ID column.
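That change might look like this in DDL (names are illustrative):

```sql
-- Lookup table: each City/State/ZipCode combination is stored once.
CREATE TABLE city (
    city_id  INT         NOT NULL PRIMARY KEY,
    city     VARCHAR(50) NOT NULL,
    state    CHAR(2)     NOT NULL,
    zip_code CHAR(5)     NOT NULL
)

-- Customer now carries only a reference to the lookup row.
CREATE TABLE customer (
    customer_id INT         NOT NULL PRIMARY KEY,
    cust_name   VARCHAR(50) NOT NULL,
    city_id     INT         NOT NULL REFERENCES city (city_id)
)
```

Customers in the same area now share a single city row instead of each repeating the three columns.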

Normalization rules have been broken down into several forms. People often refer to the third normal form (3NF) when talking about database design. This is what most database designers try to achieve: In the conceptual stages, data is segmented and normalized as much as possible, but for practical purposes those segments are changed during the evolution of the data model. Various normal forms may be introduced for different parts of the data model to handle the unique situations you may face.

Whether you have heard about normalization or not, your database most likely follows some of the rules, unless all of your data is stored in one giant table. We will take a look at the first three normal forms and the rules for determining the different forms here.

Rules for First Normal Form (1NF)

Eliminate repeating groups. This table contains repeating groups of data in the Software column.

Computer    Software
--------    -------------------
1           Word
2           Access, Word, Excel
3           Word, Excel

To follow the First Normal Form, we store one type of software for each record.

Computer    Software
--------    --------
1           Word
2           Access
2           Word
3           Word
3           Excel


Rules for Second Normal Form (2NF)

Meet 1NF, then eliminate redundant data. This table repeats the name of the software, which is redundant data.

Computer    Software
--------    --------
1           Word
2           Access
2           Word
3           Word
3           Excel

To eliminate the redundant storage of data, we create two tables. The first table stores a SoftwareID reference into our new second table, which holds a unique list of software titles.

Computer    SoftwareID
--------    ----------
1           1
2           2
2           1
3           1
3           3


SoftwareID    Software
----------    --------
1             Word
2             Access
3             Excel
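With the data split this way, the original list can be reconstructed with a join (table names here are illustrative):

```sql
-- Rebuild the computer/software pairs from the two 2NF tables.
SELECT cs.computer, s.software
FROM computer_software AS cs
JOIN software AS s ON s.software_id = cs.software_id
ORDER BY cs.computer, s.software
```

Each software title is stored once, and the join substitutes it back wherever the SoftwareID appears.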

Rules for Third Normal Form (3NF)

Meet 1NF and 2NF, then eliminate columns not dependent on the key. In this table, each row mixes data about the computer with data about its user.

Computer    User Name    User Hire Date    Purchased
--------    ---------    --------------    ---------
1           Joe          4/1/2000          5/1/2003
2           Mike         9/5/2003          6/15/2004

To eliminate columns not dependent on the key, we would create the following tables. Now the data stored in the computer table is only related to the computer, and the data stored in the user table is only related to the user.

Computer    Purchased
--------    ---------
1           5/1/2003
2           6/15/2004


User    User Name    User Hire Date
----    ---------    --------------
1       Joe          4/1/2000
2       Mike         9/5/2003


Computer    User
--------    ----
1           1
2           2
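Joining the three tables back together recovers the original rows (table and column names are illustrative):

```sql
-- Reassemble the computer, its user, and both dates from the 3NF tables.
SELECT c.computer_id, u.user_name, u.hire_date, c.purchased
FROM computer AS c
JOIN computer_user AS cu ON cu.computer_id = c.computer_id
JOIN users AS u ON u.user_id = cu.user_id
```

Because the user's hire date now lives only in the user table, changing it is a single-row update no matter how many computers that user has.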

What does normalization have to do with SQL Server?

To be honest, the answer here is nothing. SQL Server, like any other RDBMS, couldn't care less whether your data model follows any of the normal forms. You could create one table and store all of your data in one table or you can create a lot of little, unrelated tables to store your data. SQL Server will support whatever you decide to do. The only limiting factor you might face is the maximum number of columns SQL Server supports for a table.

SQL Server does not force or enforce any rules that require you to create a database in any of the normal forms. You are able to mix and match any of the rules you need, but it is a good idea to try to normalize your database as much as possible when you are designing it. People tend to spend a lot of time up front creating a normalized data model, but as soon as new columns or tables need to be added, they forget about the initial effort that was devoted to creating a nice clean model.

To assist in the design of your data model, you can use the DaVinci tools that are part of SQL Server Enterprise Manager.

Advantages of normalization

    1. Smaller database: By eliminating duplicate data, you reduce the overall size of the database.
    2. Better performance:

    a. Narrow tables: More finely tuned tables have fewer columns, so more records fit on each data page.
    b. Fewer indexes per table mean faster maintenance tasks such as index rebuilds.
    c. You join only the tables you actually need.

Disadvantages of normalization

    1. More tables to join: By spreading out your data into more tables, you increase the need to join tables.
    2. Tables contain codes instead of real data: Repeated data is stored as codes rather than meaningful data. Therefore, there is always a need to go to the lookup table for the value.
    3. Data model is difficult to query against: The data model is optimized for applications, not for ad hoc querying.