December 30, 2008

To Defragment All Indexes In A Database

A simple way to defragment all indexes in a database that are fragmented above a declared threshold.

-- Specify your Database Name
USE AdventureWorks;
GO
-- Declare variables
SET NOCOUNT ON;
DECLARE @tablename VARCHAR(128);
DECLARE @execstr VARCHAR(255);
DECLARE @objectid INT;
DECLARE @indexid INT;
DECLARE @frag decimal;
DECLARE @maxfrag decimal;
-- Decide on the maximum fragmentation to allow for.
SELECT @maxfrag = 30.0;
-- Declare a cursor.
DECLARE tables
CURSOR FOR SELECT CAST(TABLE_SCHEMA AS VARCHAR(100))
+'.'+CAST(TABLE_NAME AS VARCHAR(100)) AS Table_Name
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';
-- Create the table.
CREATE TABLE #fraglist (
ObjectName CHAR(255),
ObjectId INT,
IndexName CHAR(255),
IndexId INT,
Lvl INT,
CountPages INT,
CountRows INT,
MinRecSize INT,
MaxRecSize INT,
AvgRecSize INT,
ForRecCount INT,
Extents INT,
ExtentSwitches INT,
AvgFreeBytes INT,
AvgPageDensity INT,
ScanDensity decimal,
BestCount INT,
ActualCount INT,
LogicalFrag decimal,
ExtentFrag decimal
);
-- Open the cursor.
OPEN tables;
-- Loop through all the tables in the database.
FETCH NEXT FROM tables INTO @tablename;
WHILE @@FETCH_STATUS = 0
BEGIN;
-- Do the showcontig of all indexes of the table
INSERT INTO #fraglist EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS');
FETCH NEXT
FROM tables
INTO @tablename;
END;
-- Close and deallocate the cursor.
CLOSE tables;
DEALLOCATE tables;
-- Declare the cursor for the list of indexes to be defragged.
DECLARE indexes CURSOR FOR
SELECT ObjectName, ObjectId, IndexId, LogicalFrag
FROM #fraglist
WHERE LogicalFrag >= @maxfrag
AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0;
-- Open the cursor.
OPEN indexes;
-- Loop through the indexes.
FETCH NEXT FROM indexes
INTO @tablename, @objectid, @indexid, @frag;
WHILE @@FETCH_STATUS = 0
BEGIN;
PRINT 'Executing DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ', ' + RTRIM(@indexid) + ') - fragmentation currently ' + RTRIM(CONVERT(VARCHAR(15),@frag)) + '%';
SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + RTRIM(@objectid) + ', ' + RTRIM(@indexid) + ')';
EXEC (@execstr);
FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag;
END;
-- Close and deallocate the cursor.
CLOSE indexes;DEALLOCATE indexes;
-- Delete the temporary table.
DROP TABLE #fraglist;
GO
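
Note: DBCC SHOWCONTIG and DBCC INDEXDEFRAG are deprecated from SQL Server 2005 onwards. A minimal sketch of the same idea using their replacements, sys.dm_db_index_physical_stats and ALTER INDEX ... REORGANIZE (the 30 percent threshold mirrors @maxfrag above; the cursor name and threshold are illustrative):

-- SQL Server 2005+: reorganize every index fragmented above 30%
-- in the current database.
DECLARE @sql NVARCHAR(MAX);
DECLARE fixlist CURSOR FOR
SELECT 'ALTER INDEX [' + i.name + '] ON ['
+ SCHEMA_NAME(o.schema_id) + '].[' + o.name + '] REORGANIZE'
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects o ON o.object_id = ps.object_id
WHERE ps.avg_fragmentation_in_percent >= 30
AND ps.index_id > 0; -- skip heaps
OPEN fixlist;
FETCH NEXT FROM fixlist INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC (@sql);
FETCH NEXT FROM fixlist INTO @sql;
END
CLOSE fixlist;
DEALLOCATE fixlist;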

December 01, 2008

How to find the Index Creation /Rebuild Date in SQL Server

This query uses the STATS_DATE() function to get the date statistics were last updated. It will not give you an accurate result if you update statistics explicitly. The logic behind the query: rebuilding an index also updates its statistics at the same time. So, as long as you are not explicitly updating statistics using the UPDATE STATISTICS tableName command, this query will give you the correct information.
--In SQL Server 2000
Select Name as IndexName, STATS_DATE ( id , indid ) as IndexCreatedDate
From sysindexes
where id=object_id('HumanResources.Employee')
-- In SQL Server 2005
Select Name as IndexName, STATS_DATE ( object_id , index_id ) as IndexCreatedDate
From sys.indexes
where object_id=object_id('HumanResources.Employee')

November 17, 2008

Performance tuning: Beginner

Slow stored procedure performance can have many causes: hardware, server memory, the OS, the network, etc.
There are primarily four parameters to monitor:
CPU: The amount of CPU time in milliseconds.
READS: A scalar counter showing the number of logical reads performed by the server.
WRITES: The number of physical disk writes performed by the server.
DURATION: The amount of time taken to execute the batch or stored procedure.
There are no specific rules for tuning SQL statements; the ultimate goal is to reduce time and system resources.

Open Management Studio and the stored procedure that needs tuning. Before running Profiler, execute the two commands below.
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
These two commands clear the buffer cache and the procedure (plan) cache, which lets you take Profiler readings from scratch.

We should take readings from SET STATISTICS IO ON and SET STATISTICS TIME ON in Management Studio.
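
For example (dbo.MyProcedure is a placeholder for the procedure being tuned):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC dbo.MyProcedure; -- placeholder: the procedure under test
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

The Messages tab then reports the logical reads per table and the CPU/elapsed time for each statement.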

  • Performance tuning, a few things:
    i) Be careful about creating temporary tables, dynamic SQL and joins. These are a gold mine for tuners, as eliminating them can make your procedure much faster.
    ii) Be very careful about the functionality of the procedure; it should return the same results after tuning as before.
    iii) It is not enough that SSMS returns the "Command(s) completed successfully." message. We should check all the tables used in the procedure: whether they have the same number of rows, and whether the rows contain the same data. These are the kinds of checks we should perform after tuning.
    iv) Always look at the execution plan for each DML statement. The execution plan is the road layout, based on which the optimizer decides which path will take less time and fewer resources.
    v) Try to avoid table scans. If the query optimizer chooses a table scan, it will scan the whole table, which is far more time consuming than an index scan. If possible, create indexes where a table scan is used (though, depending on the size of the table, the optimizer may still choose a table scan even when an index exists on the column; in that case the table should contain only a small number of records).
    vi) Always try to write SQL statements in ANSI format. Generally, ANSI syntax does not reduce logical reads, but it is easier to understand (see the example below).
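
For example (table and column names are illustrative):

-- Old-style join
SELECT o.OrderID, c.Name
FROM Orders o, Customers c
WHERE o.CustomerID = c.CustomerID;

-- Equivalent ANSI join: same logical reads, clearer intent
SELECT o.OrderID, c.Name
FROM Orders o
INNER JOIN Customers c ON o.CustomerID = c.CustomerID;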

November 13, 2008

SQL Server tough Interview questions

1. What is the output of the following query? SELECT 1 UNION SELECT NULL UNION SELECT '1'

2. What do you expect the result of the following query to be? No cheating, don't run until you've answered!
WITH DATA (Numbers) AS
(SELECT NULL UNION ALL
SELECT NULL UNION ALL
SELECT NULL UNION ALL
SELECT 1 UNION ALL
SELECT 2 UNION
SELECT 3)
SELECT COUNT(ALL Numbers) AS NULLNumberCount FROM DATA WHERE Numbers IS NULL

3. You are trying to run some scripts using SQLCMD but keep getting errors that your password has expired. SSMS is not installed on this machine. What should you do?

4. What does sp_validname do?

5. You receive some data from the UI and need to populate table Customer if the name received from the UI does not exist in the database. Which of the options below should be used for this purpose?
1. IF NOT EXISTS ( SELECT * FROM Customer WHERE Name = @CustomerName )
INSERT INTO [Customer] ( [Name]) VALUES ( @CustomerName)
2. INSERT INTO [Customer] ( [Name])
SELECT @CustomerName
WHERE NOT EXISTS ( SELECT * FROM Customer WHERE Name = @CustomerName )

6. What will be the count against each category? Each row in the result is shown as a pair in the answers.
CREATE TABLE #CATEGORY (CATID INT, VAL1 INT)
INSERT INTO #CATEGORY VALUES(1,NULL)
INSERT INTO #CATEGORY VALUES(2,1)
INSERT INTO #CATEGORY VALUES(3,2)
CREATE TABLE #DATA (VAL1 INT)
INSERT INTO #DATA VALUES(1)
INSERT INTO #DATA VALUES(1)
INSERT INTO #DATA VALUES(1)
INSERT INTO #DATA VALUES(2)
INSERT INTO #DATA VALUES(2)
INSERT INTO #DATA VALUES(3)
SELECT C.CATID, COUNT(*)
FROM #DATA D
INNER JOIN #CATEGORY C ON C.VAL1 = D.VAL1 OR C.VAL1 IS NULL
GROUP BY C.CATID

7. Cursor optimization: We all know that cursors are slow on SQL Server and should be avoided when possible. However, there are a few cases where they can't be avoided, for instance because an external process has to be invoked for each row, or because you run into one of the very rare problems where a cursor actually runs faster than a set-based solution. So, suppose you find yourself in a situation where you do need a cursor: which cursor option must be specified in order to assure maximum performance, assuming the cursor fits in cache?

8. You are developing security policy for your SQL Servers and have all of the data entry clerks needing access to a series of tables. You create a Data Entry role and assign the proper permissions as well as add the users.
You then find out that Bob is a part of the HR group because of cross training and needs to use the same objects from the same application except for the Vacation table, to which he should not have access. No column permissions are assigned. What should you do?

9. What is the difference between a WHERE clause and a HAVING clause?

10. A developer is building an application against SQL Server 2000 and comes to see you regarding his design. This portion of the application provides reporting on a series of clients that have been selected. His design is as follows:
Display a list of clients that can be selected
Loop through the list of clients and call a stored procedure to get data for each client (same data for each client)
Assemble this information and display a grid to the user
To improve performance, what suggestions would you give the developer?

11. I want to append output data from a query into an existing file, which command prompt tool will help me?

12. If I create a database in simple recovery mode in SQL Server 2005, then backup the database, then add 1000 records, then change the recovery mode to full, then add 500 more. Can I recover using the log backups?

13. In SQL Server 2008, what is the maximum number of table sources that you can have in a single SELECT query?

14. What does the CURRENT_TIMESTAMP function return?

November 03, 2008

SQL Server Memory

Buffer Pool (BPool) Consists of several fragmented regions (up to 32) of address space used by SQL Server. This area is dynamic (the size can change) and is used by SQL Server for many internal processes, such as storing compiled and execution plans, creating indexes, and allocating for cursors.
MemToLeave Consists of a single contiguous region of addresses within the SQL Server process space that is set aside at startup. This area is static (the size cannot change) and is left unallocated by SQL Server for the use by components in the SQL Server process space, such as extended procedures, COM/OLE Automation objects, and linked servers. Memory allocations by SQL Server larger than 8 KB are also made from the MemToLeave area.
Reserved A region of address space that is set aside for future use of a process, and the OS has not used any physical RAM for this allocation.
Committed A region of address space that is currently being used by a process, and the OS has provided either physical RAM and/or paging file space for this allocation.
Allocation The act of giving the memory resource to the consumer.

SQL Server grows and shrinks its memory usage dynamically by committing and de-committing buffers from the buffers reserved at startup. The LazyWriter process is responsible for growing and shrinking the BPool. A committed bitmap array is maintained to track the commit or de-commit of buffers.
Growing the BPool :
When the LazyWriter wakes up, it checks if SQL Server needs more memory. If so, then the LazyWriter checks the committed bitmap to find a buffer with the bit set to off (0 or not committed) and locates the buffer. The buffer is then committed, and finally the bit in the committed bitmap is flipped to on (1 or committed).
Shrinking the BPool:
When the LazyWriter wakes up, it checks if the OS has sufficient memory available to service the memory requests from other applications. If not, the LazyWriter looks for buffers that can be de-committed. When it finds such a buffer, it de-commits it: the 8 KB page returns to SQL Server's reserved address space and the physical memory behind it is released for the OS to use.

DBCC MEMORYSTATUS
The DBCC MEMORYSTATUS command provides a snapshot of the current memory status of SQL Server. The output from this command is useful in troubleshooting issues that relate to the memory consumption of SQL Server or to specific out-of-memory errors.

October 14, 2008

@@IDENTITY vs. SCOPE_IDENTITY

Both @@IDENTITY and SCOPE_IDENTITY() return the last identity value (primary key) that was generated by your active session, but in different scenarios they can each return different values. Both return the last identity value generated in any table in the current session. However, SCOPE_IDENTITY returns the value inserted only within the current scope; @@IDENTITY is not limited to a specific scope. The active session is the current activity you are engaged in; for example, a call to a stored procedure. Each call to a stored procedure (or user-defined function, etc.) is a session, unless the stored procedure is nested in the stored procedure you are calling. In the case of a nested stored procedure or user-defined function, while they are separate methods, they are part of the current session but not part of the current scope. Your scope is limited to the method (stored procedure or user-defined function) that you explicitly invoked. This is where the difference between @@IDENTITY and SCOPE_IDENTITY() comes in.
@@IDENTITY will return the last identity value entered into a table in your current session (this is limited to your session only, so you won't get identities entered by other users). While @@IDENTITY is limited to the current session, it is not limited to the current scope. In other words, if you have a trigger on a table that causes an identity to be created in another table, you will get the identity that was created last, even if it was the trigger that created it. Now this isn't bad, as long as you ensure that things are done in the correct order. Where this can get ugly is when there is an application revision and a new trigger gets added that gets fired from your stored procedure. Your code didn't anticipate this new trigger, so you could now be getting an incorrect value back.
SCOPE_IDENTITY(), like @@IDENTITY, will return the last identity value created in the current session, but it also limits it to your current scope. In other words, it returns the last identity value that you explicitly created, rather than any identity created by a trigger or a user-defined function.

For example, there are two tables, T1 and T2, and an INSERT trigger is defined on T1. When a row is inserted to T1, the trigger fires and inserts a row in T2. This scenario illustrates two scopes: the insert on T1, and the insert on T2 by the trigger.
Assuming that both T1 and T2 have identity columns, @@IDENTITY and SCOPE_IDENTITY will return different values at the end of an INSERT statement on T1. @@IDENTITY will return the last identity column value inserted across any scope in the current session. This is the value inserted in T2. SCOPE_IDENTITY() will return the IDENTITY value inserted in T1. This was the last insert that occurred in the same scope. The SCOPE_IDENTITY() function will return the null value if the function is invoked before any INSERT statements into an identity column occur in the scope.
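
A minimal sketch of that scenario (table and trigger names are illustrative):

CREATE TABLE T1 (id INT IDENTITY(1,1), val VARCHAR(10));
CREATE TABLE T2 (id INT IDENTITY(100,1), val VARCHAR(10));
GO
CREATE TRIGGER trg_T1_insert ON T1 AFTER INSERT
AS
INSERT INTO T2 (val) SELECT val FROM inserted;
GO
INSERT INTO T1 (val) VALUES ('test');
SELECT SCOPE_IDENTITY() AS ScopeIdentity, -- 1: from T1, the current scope
@@IDENTITY AS SessionIdentity; -- 100: from T2, generated by the trigger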
@@IDENTITY returns the most recently created identity for your current connection, not necessarily the identity for the recently added row in a table. Always use SCOPE_IDENTITY() to return the identity of the recently added row.

October 13, 2008

Mutating Tables

OK, so you've just received the error:
ORA-04091: table XXXX is mutating, trigger/function may not see it

What is a mutating table?
A mutating table is a table that is currently being modified by an UPDATE, DELETE, or INSERT statement. For example, if your trigger contains a SELECT statement or an UPDATE statement referencing the table it is triggering off of, you will receive the error. Another way this error can occur is if the trigger has statements that change the primary, foreign, or unique key columns of the table the trigger is triggering off of.
Put another way: a table is mutating when a before/after update trigger on it tries to update the same table. Since that update would fire the same trigger again, the trigger would be called an infinite number of times, and the table is said to be mutating.

When does the mutating table error occur?
I have a table containing a key/status/effective date combination. When status changes, the values are propagated by trigger to a log table recording the status history. When no RI constraint is in place everything works fine.
When an RI trigger enforces a parent-child relationship, the status change logging trigger fails because the parent table is mutating. Propagating the values to the log table implicitly generates a lookup back to the parent table to ensure the RI constraint is satisfied.

Fix Oracle mutating trigger table errors:
Don't use triggers - The best way to avoid the mutating table error is not to use triggers. While the object-oriented Oracle provides "methods" that are associated with tables, most savvy PL/SQL developers avoid triggers unless absolutely necessary.
Use an "after" trigger - If you must use a trigger, it's best to avoid the mutating table error by using an "after" trigger, to avoid the currency issues associated with a mutating table. For example, using a trigger ":after update on xxx", the original update has completed and the table will not be mutating.
Re-work the trigger syntax – Find other ways to avoid mutating tables with a combination of row-level and statement-level triggers.
Use autonomous transactions - You can avoid the mutating table error by marking your trigger as an autonomous transaction, making it independent from the table that calls the procedure.
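
A minimal PL/SQL sketch of the autonomous-transaction approach, applied to the status-logging example above (all table, column and trigger names are illustrative):

CREATE OR REPLACE TRIGGER trg_log_status
AFTER UPDATE OF status ON parent_tab
FOR EACH ROW
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO status_log (key_val, old_status, new_status, changed_on)
VALUES (:OLD.key_val, :OLD.status, :NEW.status, SYSDATE);
COMMIT; -- an autonomous transaction must commit or roll back before returning
END;
/

Note that the autonomous transaction commits independently, so the log row persists even if the triggering update is rolled back.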

October 11, 2008

Retrieving SQL Server Database Properties with DATABASEPROPERTYEX

SQL Server 2000 and 2005 have a built-in function called DATABASEPROPERTYEX that allows you to return the specific information you are looking for, for one or all databases. This function can be called from a SELECT statement to return the results of one or more databases.
Syntax :
DATABASEPROPERTYEX ( database , property )
e.g.
Retrieving the status of the AUTO_SHRINK database option:
SELECT DATABASEPROPERTYEX('AdventureWorks', 'IsAutoShrink');
Retrieving the default collation for a database:
SELECT DATABASEPROPERTYEX('AdventureWorks', 'Collation');

We can use DATABASEPROPERTYEX for Collecting SQL Server database usage information for free space and trending as follows:

DBAs need to monitor the following database/database file level metrics:
Server/Instance Name
Database Name
Database File Names (both logical and full physical path)
File Size (In Megabytes)
Database Status
Recovery Mode
Free Space (In Megabytes and Percent)

Script:

DECLARE @DBInfo TABLE (
ServerName VARCHAR(100),
DatabaseName VARCHAR(100),
FileSizeMB INT,
LogicalFileName sysname,
PhysicalFileName NVARCHAR(520),
Status sysname,
Updateability sysname,
RecoveryMode sysname,
FreeSpaceMB INT,
FreeSpacePct VARCHAR(7),
FreeSpacePages INT,
PollDate datetime)

DECLARE @command VARCHAR(5000)
SELECT @command = 'Use [' + '?' + '] SELECT @@servername as ServerName, '
+ '''' + '?' + '''' + ' AS DatabaseName,
CAST(sysfiles.size/128.0 AS int) AS FileSize,
sysfiles.name AS LogicalFileName,
sysfiles.filename AS PhysicalFileName,
CONVERT(sysname,DatabasePropertyEx(''?'',''Status'')) AS Status,
CONVERT(sysname,DatabasePropertyEx(''?'',''Updateability'')) AS Updateability,
CONVERT(sysname,DatabasePropertyEx(''?'',''Recovery'')) AS RecoveryMode,
CAST(sysfiles.size/128.0 - CAST(FILEPROPERTY(sysfiles.name, ' + '''' + 'SpaceUsed' + '''' + ' ) AS int)/128.0 AS int) AS FreeSpaceMB,
CAST(100 * (CAST (((sysfiles.size/128.0 - CAST(FILEPROPERTY(sysfiles.name, ' + '''' + 'SpaceUsed' + '''' + ' ) AS int)/128.0)/(sysfiles.size/128.0)) AS decimal(4,2))) AS varchar(8)) + ' + '''' + '%' + '''' + ' AS FreeSpacePct,
GETDATE() as PollDate
FROM dbo.sysfiles'

INSERT INTO @DBInfo (ServerName, DatabaseName, FileSizeMB, LogicalFileName, PhysicalFileName, Status, Updateability, RecoveryMode, FreeSpaceMB, FreeSpacePct, PollDate)
EXEC sp_MSForEachDB @command

SELECT ServerName, DatabaseName, FileSizeMB, LogicalFileName, PhysicalFileName, Status, Updateability, RecoveryMode, FreeSpaceMB, FreeSpacePct, PollDate
FROM @DBInfo
ORDER BY ServerName, DatabaseName

Remark:

CONVERT(sysname,DatabasePropertyEx(''?'',''Status'')) AS Status
CONVERT(sysname,DatabasePropertyEx(''?'',''Updateability'')) AS Updateability
CONVERT(sysname,DatabasePropertyEx(''?'',''Recovery'')) AS RecoveryMode

The DatabasePropertyEx() function returns the value for the database property or option specified. In this case we are interested in the Status, Updateability, and Recovery database option values. I convert the values to the sysname datatype for compatibility with my metadata repository.

September 05, 2008

Hi guys,

You can also subscribe to my blog on omeego.com, where you can get my posts in your inbox and add comments to interact with the small community of people who follow my blog.

Click on the following link to subscribe to my omeego circle.

http://omeego.com/xfe6c10e09a51?circle=Avishkar_DB_blog&invitedby=avishkarm

August 26, 2008

SQL Injection

What is SQL Injection?
SQL Injection is the number one exploit used by hackers to steal information and deface websites. SQL Injection is based on malicious altering of SQL statements from their intended format, by exploiting weaknesses of a client application that is used to create them. Most commonly, this takes place in Internet or intranet scenarios, where users are expected to enter data via text boxes presented through a Web-page interface, which, in turn is used to populate a dynamically formed query. The most classic case of such attacks (although certainly not the only one) involves a design, in which access to a database is controlled by the username and password pair stored in one of its tables. In the case of a match, a user is automatically redirected to another Web page, from which other SQL statements can be run.
Using SQL Injection, attackers can:
1. Change or delete website content
2. Steal user information, such as email addresses,
username & password, credit card details
3. Access databases connected to the website

An SQL Injection attack means that hackers inject SQL commands into form fields or URL parameters, which may then be executed by the database behind an unprotected web server.

SQL Injection example :
Dynamic SQL puts a system at risk for SQL injection attacks, in which an attacker compromises the system's security by executing unauthorized code.
The following example shows how a user exploits the dynamic SQL in a stored procedure. The Web page in this example lets the user search for users based on their name. The Web page executes the following stored procedure, SearchUser:
CREATE TABLE [dbo].[Users](
[UserId] [int] NULL,
[Name] [varchar](50) COLLATE Latin1_General_CI_AS NULL
) ON [PRIMARY]

CREATE PROCEDURE SearchUser
(@name varchar(100))
AS
BEGIN
DECLARE @SQL nvarchar(4000), @WhereClause nvarchar(500)
SET @SQL = N'SELECT * FROM dbo.Users'
SET @WhereClause = N' WHERE 1=1'
IF @name IS NOT NULL
SET @WhereClause = @WhereClause + ' AND name =
''' + @name + ''''
EXEC(@SQL + @WhereClause)
END
GO

The @name parameter is concatenated directly into the dynamic WHERE clause without any validation.
Using the SearchUser stored procedure, a web user can pass the following value, which returns both the intended result and a full list of users from the Users table, demonstrating the security danger of forming dynamic SQL without specific checks:
EXEC SearchUser 'avi''; SELECT * FROM Users; --'
By injecting SQL code into the search parameters, an attacker can potentially perform unauthorized actions depending on the permissions of the user account, the Web page, or application executing the stored procedure.

How to secure SQL Server databases from SQL injection:
There are a few ways to protect your database against these kinds of attacks. First, we need to lock down the database security using database security best practices. This involves setting up the database security with the lowest set of permissions possible. It also includes not granting any direct table-level access. All access to the tables should be done through stored procedures, and those stored procedures should not include any dynamic SQL.
Without securing the application layer against SQL injection attacks, all bets are off. The data, as it comes into the database, is basically impossible to validate within the database. It needs to be validated at the application layer.
If you don't validate the data in your input variables, you leave yourself open to SQL injection attacks. Unvalidated input allows the attacker to pass in a single quote and a semicolon, which tell SQL Server to end the value and the statement, moving on to the next statement in the batch. You should do some basic validation and replace any single quotes within the variable with two single quotes.


Measures to avoid SQL injection :
Validate all input coming from the user on the server.
Avoid dynamic SQL queries if an alternative method is available.
Use parameterized stored procedures with embedded parameters. It is recommended that you use parameterized queries and filter all user input for non-alphanumeric characters (see the sketch after this list).
Execute stored procedures using a safe interface such as Callable statements in JDBC or CommandObject in ADO.
Use a low privileged account to run the database.
Give proper roles and privileges to the stored procedure being used in the applications.
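
For example, the SearchUser procedure above can be rewritten with sp_executesql and an embedded parameter; a minimal sketch (SearchUserSafe is an illustrative name):

CREATE PROCEDURE SearchUserSafe (@name VARCHAR(100))
AS
BEGIN
DECLARE @SQL NVARCHAR(4000);
SET @SQL = N'SELECT * FROM dbo.Users WHERE 1=1';
IF @name IS NOT NULL
SET @SQL = @SQL + N' AND name = @name'; -- parameter marker, not concatenation
EXEC sp_executesql @SQL, N'@name VARCHAR(100)', @name = @name;
END
GO

Because @name is passed as a parameter rather than concatenated into the string, a payload such as avi'; SELECT * FROM Users; -- is treated as a literal name value, not as executable SQL.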

The most systematic method to do so is to set coding standards that require this to be done. If the code is already written, a code review should be done to detect any vulnerabilities. It is also recommended you look at some of the automated tools available for detecting these types of problems.
Even if you feel you have closed all known vulnerabilities, it is still in your best interest to prevent these specific attacks by disabling some of SQL Server’s functionality. This is not practical if you are actually using the functionality. Fortunately, the functionality we are looking to disable is not used often.

August 19, 2008

Pivots – Converting Rows to Columns in SQL Server 2005

Sometimes it is necessary to rotate results so that [the data in] columns are presented horizontally and [the data in] rows are presented vertically. This is known as creating a PivotTable, creating a cross-tab report, or rotating data.
Let's create a table to demonstrate the PIVOT function.


CREATE TABLE TestPivot
( EmpID INT,
SalaryComponent INT,
SalaryAmount INT
)
go
INSERT INTO TestPivot
SELECT 1234, 1, 1000
UNION ALL
SELECT 1234, 2, 20
UNION ALL
SELECT 1234, 3, 4500
UNION ALL
SELECT 123, 1, 2000
UNION ALL
SELECT 123, 2, 30
UNION ALL
SELECT 123, 3, 4500
UNION ALL
SELECT 234, 1, 5000
UNION ALL
SELECT 234, 3, 4500
go

We need to show the data as follows:

EmpId Comp1 Comp2 Comp3 Comp4 Salary
123 2000 30 4500 0 6530
234 5000 0 4500 0 9500
1234 1000 20 4500 0 5520


This can be done using the following query:

SELECT EmpID,
SUM(CASE WHEN SalaryComponent = 1 THEN SalaryAmount ELSE 0 END) AS [Comp1],
SUM(CASE WHEN SalaryComponent = 2 THEN SalaryAmount ELSE 0 END) AS [Comp2],
SUM(CASE WHEN SalaryComponent = 3 THEN SalaryAmount ELSE 0 END) AS [Comp3],
SUM(CASE WHEN SalaryComponent = 4 THEN SalaryAmount ELSE 0 END) AS [Comp4],
SUM(SalaryAmount) AS Salary
FROM TestPivot
GROUP BY EmpID

let's see how to use PIVOT to do the same thing...

SELECT EmpID,
COALESCE([1],0) AS [Comp1],
COALESCE([2],0) AS [Comp2],
COALESCE([3],0) AS [Comp3],
COALESCE([4],0) AS [Comp4],
COALESCE([1],0)+COALESCE([2],0)+COALESCE([3],0)+COALESCE([4],0) as Salary
FROM
(
SELECT EmpID,SalaryComponent,SalaryAmount
FROM TestPivot
) as T
PIVOT (SUM(SalaryAmount) FOR SalaryComponent IN ([1],[2],[3],[4])) AS P
ORDER BY EmpID

PIVOT identifies the aggregate to be used, the column to pivot in the FOR clause, and the list of values that we want to pivot in the IN clause... in this case, the salary component number. Also notice that you must treat those values as if they were column names: they must either be put in brackets or double quotes (if the QUOTED_IDENTIFIER setting is ON).
You must also bring EmpID up as the row identifier in the pivot. Think of this as your "anchor" for the rows.

If you want a total for each row in the pivot, you can no longer use just an aggregate. Instead, you must add all the columns together.

Notice the NULLs where there are no values or where a NULL has been added into a total. Remember that anything plus a NULL is still a NULL. All of this occurs because PIVOT doesn't do any substitutions like the CASE statements do. To fix this little problem, we have to use COALESCE (or ISNULL) on the columns.

August 07, 2008

Monitoring server disk space in SQL Server

DBAs must monitor their SQL Servers carefully to ensure that the system and user databases have sufficient disk space for the life time of the applications using these SQL Servers. Failure to do so can result in the following problems:
All SQL Server jobs fail – if the msdb database does not have room to grow, job information cannot be logged and your SQL Server jobs will fail to start.
Sorts, aggregates and operations which make heavy use of tempdb may fail if tempdb data or log files can't grow to accommodate these operations.
The transaction log may fill up, all DML operations on your database may fail, and point-in-time recovery is lost.
Your database files themselves may fill up and all DML operations on your database may fail.
A database experiencing out-of-control growth of its transaction log or data files may fill up the entire disk, causing other databases and applications to fail.

Steps for monitoring disk space in SQL Server :

Auto-growth settings for logs and files
The best practice is to size your data and log files to sizes that minimize the number of auto growths. For data files, the best practice is not to use the Growth in Percent, but rather the Growth in Megabytes, and select a value that will work for your database.

File growth is an intensive disk I/O operation and if SQL Server has to wait for a data or log file to expand, you could certainly see this harm performance and response times. The default growth increment is 1MB for data files and 10 percent for log files, which can be insufficient growth increments for busy systems. Additionally, relying on autogrow can lead to disk fragmentation because the data or log files are not contiguous on disk—this means response times may be longer than necessary, as data is physically scattered on the disk.
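
For example, a fixed-megabyte growth increment can be set per file (the logical file name here is illustrative):

ALTER DATABASE AdventureWorks
MODIFY FILE (NAME = AdventureWorks_Log, FILEGROWTH = 256MB);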
To size your transaction log files you should:
Shrink the log as much as possible.
Dump (back up) the log every 10-20 minutes.
Monitor its growth over a week, noting the maximum size.
Lastly, dump the log, shrink it one more time to the minimum size, and then manually size it for the observed maximum.
This prevents too many virtual log files, which can lead to performance problems.

Check server disk space :
Xp_fixeddrives will give the amount of free space available on local fixed drives.
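
For example:

EXEC master.dbo.xp_fixeddrives;

This returns one row per fixed drive with the free space in megabytes.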

Check database files space :

select db_name(dbid), name, drive = left(filename,1), filename,
filegroup = filegroup_name(groupid),
'size' = convert(nvarchar(15), convert(bigint,size)*8) + N'KB',
'maxsize' = (case maxsize when -1 then N'Unlimited'
else convert(nvarchar(15), convert(bigint,maxsize)*8) + N'KB'
end),
'usage' = (case status & 0x40 when 0x40 then 'log only' else 'data only' end)
from sysaltfiles
where dbid = 7 -- replace 7 with the dbid of the database to check
order by usage
Monitor using Performance Monitor:
You can only use Performance Monitor to monitor log auto-growths – there is no counter for data file growths. Use Performance Monitor to monitor the SQL Server Databases performance object counters: Log Growths, Percent Log Used, and Data File Size.
Performance Monitor allows you to watch specific databases or all databases, and if necessary, raise alerts to send net messages or write events to the Windows NT application log, which monitoring software (like NetIQ) can then pick up and react to.

August 04, 2008

Blocking and Deadlocking

Blocking
Blocking is a necessary side effect of using locks to control concurrent resource access. A lock is either compatible or incompatible with other locks depending on the lock mode. Before a lock is granted, SQL Server first checks to see if an incompatible lock on the same resource exists. If not, the lock request is granted and execution proceeds. When the requested lock mode is incompatible with an existing lock mode on the same resource, the requesting session is blocked and waits until:
1) The blocking session releases the lock
2) The LOCK_TIMEOUT threshold of the blocked session is reached (indefinite by default)
3) The blocked client cancels the query
4) The blocked client times out (30-second default in ADO/ADO.NET, technically the same as #3)

Deadlocking
A deadlock is basically a special blocking scenario where 2 sessions are waiting on each other (directly or indirectly). Neither can proceed, so both will wait indefinitely unless a timeout or intervention occurs. Unlike a normal blocking scenario, SQL Server will intervene when a deadlock situation is detected and cancel one of the transactions involved. The locks of the cancelled transaction are then released so the other blocked session can proceed. By default, SQL Server chooses the transaction that is the least expensive to roll back as the deadlock victim. If SET DEADLOCK_PRIORITY has been issued, SQL Server chooses the session with the lowest priority as the victim.
A deadlock always starts as a normal block with one session waiting while the other continues. It is only when the running session is later blocked by the waiting session that the deadlock occurs.

You can view session and lock info during this blocking episode by executing sp_who2 and sp_lock from a new SSMS window or you can use the Activity Monitor from the SSMS Object Explorer GUI.
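
For example, from a second connection:

EXEC sp_who2; -- the BlkBy column shows which spid is blocking the session
EXEC sp_lock; -- one row per lock, with the lock mode and status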

How To Calculate the Number of Week Days Between two Dates

declare @StartDay datetime
declare @EndDay datetime
set @StartDay = getdate()-30
set @EndDay = getdate()
select DateDiff(dd, @StartDay, @EndDay) - DateDiff(ww, @StartDay, @EndDay)*2 -- subtracts two days (Sat + Sun) for each week boundary crossed
If you have a table of holidays, then you can simply subtract them out as well:
declare @StartDay datetime
declare @EndDay datetime
set @StartDay = getdate()-30
set @EndDay = getdate()
select DateDiff(dd, @StartDay, @EndDay) -
DateDiff(ww, @StartDay, @EndDay)*2 -
(select count(*) from holidays where holiday_date between @StartDay and @EndDay)

July 30, 2008

Script all indexes ( except PK ) of a table

Script all indexes ( except PK ) of a table :
SELECT OBJECT_NAME(SI.Object_ID) AS TABLE_NAME, SI.Name
FROM Sys.Indexes SI
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS TC
ON SI.Name = TC.CONSTRAINT_NAME AND OBJECT_NAME(SI.Object_ID) = TC.TABLE_NAME
WHERE TC.CONSTRAINT_NAME IS NULL
AND OBJECTPROPERTY(SI.Object_ID, 'IsUserTable') = 1
AND OBJECT_NAME(SI.Object_ID)
in ('Table_Names') -- replace with the table name(s) of interest
ORDER BY OBJECT_NAME(SI.Object_ID), SI.Index_ID

July 17, 2008

Script all indexes as CREATE INDEX statements

SELECT
TOP 100 PERCENT REPLICATE(' ',4000) AS COLNAMES, OBJECT_NAME(I.ID) AS TABLENAME,
I.ID AS TABLEID,
I.INDID AS INDEXID,
I.NAME AS INDEXNAME,
I.STATUS, INDEXPROPERTY (I.ID,I.NAME,'ISUNIQUE') AS ISUNIQUE,
INDEXPROPERTY (I.ID,I.NAME,'ISCLUSTERED') AS ISCLUSTERED,
INDEXPROPERTY (I.ID,I.NAME,'INDEXFILLFACTOR') AS INDEXFILLFACTOR
INTO #TMP
FROM SYSINDEXES I
WHERE I.INDID > 0
AND I.INDID < 255
AND (I.STATUS & 64)=0

DECLARE
@ISQL VARCHAR(4000),
@TABLEID INT,
@INDEXID INT,
@MAXTABLELENGTH INT,
@MAXINDEXLENGTH INT

SELECT @MAXTABLELENGTH=MAX(LEN(TABLENAME))
FROM #TMP
SELECT @MAXINDEXLENGTH=MAX(LEN(INDEXNAME))
FROM #TMP

DECLARE C1 CURSOR
FOR SELECT TABLEID,INDEXID
FROM #TMP
OPEN C1
FETCH NEXT FROM C1 INTO @TABLEID,@INDEXID
WHILE @@FETCH_STATUS <> -1
BEGIN
SET @ISQL = ''
SELECT @ISQL = @ISQL + ISNULL(SYSCOLUMNS.NAME,'') + ','
FROM SYSINDEXES I
INNER JOIN SYSINDEXKEYS ON I.ID = SYSINDEXKEYS.ID AND I.INDID = SYSINDEXKEYS.INDID
INNER JOIN SYSCOLUMNS ON SYSINDEXKEYS.ID = SYSCOLUMNS.ID AND SYSINDEXKEYS.COLID = SYSCOLUMNS.COLID
WHERE I.INDID > 0 AND I.INDID < 255 AND (I.STATUS & 64) = 0
AND I.ID = @TABLEID AND I.INDID = @INDEXID
ORDER BY SYSCOLUMNS.COLID

UPDATE #TMP
SET COLNAMES= @ISQL
WHERE TABLEID= @TABLEID
AND INDEXID= @INDEXID

FETCH NEXT FROM C1 INTO @TABLEID,@INDEXID
END
CLOSE C1
DEALLOCATE C1 --AT THIS POINT, THE 'COLNAMES' COLUMN HAS A TRAILING COMMA
UPDATE #TMP SET COLNAMES=LEFT(COLNAMES,LEN(COLNAMES) -1)

SELECT 'CREATE '
+ CASE
WHEN ISUNIQUE = 1
THEN ' UNIQUE '
ELSE ' '
END
+ CASE WHEN ISCLUSTERED = 1
THEN ' CLUSTERED '
ELSE ' '
END
+ ' INDEX [' + UPPER(INDEXNAME) + ']'
+ SPACE(@MAXINDEXLENGTH - LEN(INDEXNAME))
+' ON [' + UPPER(TABLENAME) + '] '
+ SPACE(@MAXTABLELENGTH - LEN(TABLENAME))
+ '(' + UPPER(COLNAMES) + ')'
+ CASE WHEN INDEXFILLFACTOR = 0 THEN ''
ELSE ' WITH FILLFACTOR = ' + CONVERT(VARCHAR(10),INDEXFILLFACTOR)
END
AS SQL FROM #TMP
--SELECT * FROM #TMP
DROP TABLE #TMP

July 11, 2008

System.Data.OracleClient requires Oracle client software version 8.1.7 or greater

Problem:
When using System.Data.OracleClient with the Oracle9i client, people get the following message while connecting to the Oracle database from an ASP.NET application:
System.Data.OracleClient requires Oracle client software version 8.1.7 or greater.

Cause:
Security permissions were not properly set when the Oracle9i/10g Release 2 client was installed on Windows with NTFS. The result of this is that the content of the ORACLE_HOME directory is not visible to Authenticated Users on the machine; this in turn causes an error when System.Data.OracleClient communicates with the Oracle connectivity software from an ASP.NET application running under Authenticated User privileges.

Solution:
To fix the problem you have to give the “Authenticated Users” group privileges on the Oracle Home directory.
1. Log on to Windows as a user with Administrator privileges.
2. Start Windows Explorer and navigate to the ORACLE_HOME folder.
3. Choose properties on the ORACLE_HOME folder.
4. Click the “Security” tab of the “Properties” window.
5. Click on “Authenticated Users” item in the “Name” list.
6. Uncheck the “Read and Execute” box in the “Permissions” list under the “Allow” column.
7. Re-check the “Read and Execute” box under the “Allow” column.
8. Click the “Advanced” button and in the “Permission Entries” verify that “Authenticated Users” are listed with permission = “Read & Execute”, and Apply To = “This folder, subfolders and files”.
9. If not, edit that line and make sure that “Apply To” drop-down box is set to “This folder, subfolders and files”. This should already be set properly but it is important that you verify it.
10. Click the “Ok” button until you close out all of the security properties windows. The cursor may present the hour glass for a few seconds as it applies the permissions you just changed to all subfolders and files.
11. Reboot, to assure that the changes have taken effect.
12. Try your application again.

June 22, 2008

Portable DBA: Oracle

This post contains examples of a number of the most commonly performed DBA activities. They are all listed in alphabetical order by the command being used, and then the activity being performed.

One note of caution: if you don’t know what a specific keyword of a command does, don’t use it without checking out its purpose. This is a reference for those who understand what something like cascade constraints means when associated with a drop table command.

alter cluster
ALTER CLUSTER pub_cluster SIZE 4K;
ALTER CLUSTER pub_cluster DEALLOCATE UNUSED KEEP 1M;

alter database: Alter a Data File
ALTER DATABASE DATAFILE 4 OFFLINE;
ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf' OFFLINE;
ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf'
RESIZE 100m;
ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf'
AUTOEXTEND ON NEXT 100M MAXSIZE 1000M;
ALTER DATABASE DATAFILE 4 END BACKUP;

alter database: Alter a Tempfile
ALTER DATABASE TEMPFILE 4 RESIZE 100M;
ALTER DATABASE TEMPFILE 4
AUTOEXTEND ON NEXT 100M MAXSIZE 1000M;
ALTER DATABASE TEMPFILE 4 DROP INCLUDING DATAFILES;
ALTER DATABASE TEMPFILE 4 OFFLINE;

alter database: ARCHIVELOG Mode Commands
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE CLEAR LOGFILE '/opt/oracle/logfiles/redo01.rdo';
ALTER DATABASE CLEAR UNARCHIVED LOGFILE
'/opt/oracle/logfiles/redo01.rdo';
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE);
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
alter database: Control File Operations
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE
AS '/opt/oracle/logfile_backup/backup_logfile.trc'
REUSE RESETLOGS;
ALTER DATABASE BACKUP CONTROLFILE TO
'/opt/oracle/logfile_backup/backup_logfile.ctl';

alter database: Create a Data File
ALTER DATABASE CREATE DATAFILE
'/opt/oracle/datafile/users01.dbf' AS '/opt/oracle/datafile/users01.dbf';
ALTER DATABASE CREATE DATAFILE 4
AS '/opt/oracle/datafile/users01.dbf';
ALTER DATABASE CREATE DATAFILE
'/opt/oracle/datafile/users01.dbf' AS NEW;

alter database: Datafile Offline/Online
See alter database: Alter a Data File
alter database: Logfile Commands
ALTER DATABASE ADD LOGFILE GROUP 2
('/opt/oracle/logfiles/redo02a.rdo', '/opt/oracle/logfiles/redo02b.rdo')
SIZE 300M REUSE;
ALTER DATABASE ADD LOGFILE MEMBER
'/opt/oracle/logfiles/redo02c.rdo'
to GROUP 2;
ALTER DATABASE ADD LOGFILE thread 3 GROUP 2
('/opt/oracle/logfiles/redo02a.rdo', '/opt/oracle/logfiles/redo02b.rdo')
SIZE 300M REUSE;
ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE MEMBER '/opt/oracle/logfiles/redo02b.rdo';
alter database: Mount and Open the Database
ALTER DATABASE MOUNT;
ALTER DATABASE OPEN;

alter database: Move or Rename a Database File or Online Redo Log
NOTE
The database must be mounted to rename or move online redo logs. The database must be mounted or the data files taken offline to move database data files.
ALTER DATABASE RENAME FILE '/ora/datafile/oldfile.dbf' TO '/ora/datafile/newfile.dbf';
alter database: Open the Database Read-Only
ALTER DATABASE OPEN READ ONLY;
alter database: Open the Database with resetlogs
ALTER DATABASE OPEN RESETLOGS;
alter database: Recover the Database

alter function: Recompile a Function
ALTER FUNCTION my_function COMPILE;

alter index: Allocate and Deallocate Extents
ALTER INDEX ix_my_tab ALLOCATE EXTENT;
ALTER INDEX ix_my_tab ALLOCATE EXTENT
DATAFILE '/ora/datafile/newidx.dbf';
ALTER INDEX ix_my_tab DEALLOCATE UNUSED;
ALTER INDEX ix_my_tab DEALLOCATE UNUSED KEEP 100M;
alter index: Miscellaneous Maintenance
ALTER INDEX ix_my_tab PARALLEL 3;
ALTER INDEX ix_my_tab NOPARALLEL;
ALTER INDEX ix_my_tab NOCOMPRESS;
ALTER INDEX ix_my_tab COMPRESS;
alter index: Modify Logging Attributes
ALTER INDEX ix_my_tab LOGGING;
ALTER INDEX ix_my_tab NOLOGGING;
alter index: Modify Storage and Physical Attributes
ALTER INDEX ix_my_tab PCTFREE 10 PCTUSED 40 INITRANS 5
STORAGE (NEXT 100k MAXEXTENTS UNLIMITED FREELISTS 10
BUFFER_POOL KEEP);

alter index: Partition – Add Hash Index Partition
ALTER INDEX ix_my_tab ADD PARTITION
TABLESPACE NEWIDXTBS;
alter index: Partition – Coalesce Partition
ALTER INDEX ix_my_tab COALESCE PARTITION;
alter index: Partition – Drop Partition
ALTER INDEX ix_my_tab DROP PARTITION ix_my_tab_jan_04;
alter index: Partition – Modify Default Attributes
ALTER INDEX ix_my_tab MODIFY DEFAULT ATTRIBUTES
FOR PARTITION ix_my_tab_jan_04
PCTFREE 10 PCTUSED 40 TABLESPACE newidxtbs
NOLOGGING COMPRESS;
alter index: Partition – Modify Partition
ALTER INDEX ix_my_tab MODIFY PARTITION ix_my_tab_jan_04
DEALLOCATE UNUSED KEEP 100M;
ALTER INDEX ix_my_tab MODIFY PARTITION ix_my_tab_jan_04
ALLOCATE EXTENT SIZE 100m;
ALTER INDEX ix_my_tab MODIFY PARTITION ix_my_tab_jan_04
PCTUSED 40 STORAGE(NEXT 50m) NOLOGGING;
alter index: Partition – Modify Subpartition
ALTER INDEX ix_my_tab MODIFY SUBPARTITION ix_my_tab_jan_04
DEALLOCATE UNUSED KEEP 100M;
ALTER INDEX ix_my_tab MODIFY SUBPARTITION ix_my_tab_jan_04
ALLOCATE EXTENT SIZE 100m;
ALTER INDEX ix_my_tab MODIFY SUBPARTITION ix_my_tab_jan_04
PCTUSED 40 STORAGE(NEXT 50m) NOLOGGING;
alter index: Partition – Rename
ALTER INDEX ix_my_tab RENAME
PARTITION ix_my_tab_jan_04 TO ix_my_tab_jan_05;
ALTER INDEX ix_my_tab RENAME
SUBPARTITION ix_my_tab_jan_04 TO ix_my_tab_jan_05;
alter index: Partition – Split
ALTER INDEX ix_my_tab SPLIT PARTITION ix_my_tab_jan_05
AT ('15-JAN-05') INTO PARTITION ix_my_tab_jan_05a
TABLESPACE myidxtbs
STORAGE (INITIAL 100m NEXT 50M FREELISTS 5);

alter index: Rebuild Nonpartitioned Indexes
ALTER INDEX ix_my_tab REBUILD ONLINE;
ALTER INDEX ix_my_tab REBUILD ONLINE
TABLESPACE idx_tbs_new PCTFREE 1
STORAGE (INITIAL 50M NEXT 50m FREELISTS 5)
COMPUTE STATISTICS PARALLEL 0;
alter index: Rebuild Partitions
ALTER INDEX ix_my_tab
REBUILD PARTITION ix_my_tab_jan_04 ONLINE;
ALTER INDEX ix_my_tab
REBUILD SUBPARTITION ix_my_tab_jan_04 ONLINE
PCTFREE 1 STORAGE (INITIAL 50M NEXT 50m FREELISTS 5)
COMPUTE STATISTICS PARALLEL 0;
alter index: Rename
ALTER INDEX ix_my_tab RENAME TO ix_my_tab_01;
alter index: Shrink
ALTER INDEX ix_my_tab SHRINK SPACE;
ALTER INDEX ix_my_tab SHRINK SPACE COMPACT CASCADE;

alter materialized view: Allocate and Deallocate Extents
ALTER MATERIALIZED VIEW mv_my_tab ALLOCATE EXTENT;
ALTER MATERIALIZED VIEW mv_my_tab DEALLOCATE UNUSED;
alter materialized view: Miscellaneous
ALTER MATERIALIZED VIEW mv_my_tab COMPRESS;
ALTER MATERIALIZED VIEW mv_my_tab PARALLEL 3;
ALTER MATERIALIZED VIEW mv_my_tab NOLOGGING;
ALTER MATERIALIZED VIEW mv_my_tab LOGGING;
ALTER MATERIALIZED VIEW mv_my_tab CONSIDER FRESH;
ALTER MATERIALIZED VIEW mv_my_tab ENABLE QUERY REWRITE;
alter materialized view: Physical Attributes and Storage
ALTER MATERIALIZED VIEW mv_my_tab
PCTFREE 5 PCTUSED 60
STORAGE (NEXT 100m FREELISTS 5);
alter materialized view: Refresh
ALTER MATERIALIZED VIEW mv_my_tab REFRESH FAST;
ALTER MATERIALIZED VIEW mv_my_tab REFRESH COMPLETE;
ALTER MATERIALIZED VIEW mv_my_tab REFRESH FAST ON DEMAND;
ALTER MATERIALIZED VIEW mv_my_tab REFRESH FAST ON COMMIT;
ALTER MATERIALIZED VIEW mv_my_tab REFRESH COMPLETE
START WITH sysdate;
ALTER MATERIALIZED VIEW mv_my_tab REFRESH COMPLETE
START WITH sysdate NEXT sysdate+1/24;
alter materialized view: Shrink Space
ALTER MATERIALIZED VIEW mv_my_tab SHRINK SPACE;
ALTER MATERIALIZED VIEW mv_my_tab
SHRINK SPACE COMPACT CASCADE;
alter materialized view log: Add Components
ALTER MATERIALIZED VIEW LOG ON my_tab ADD PRIMARY KEY;
ALTER MATERIALIZED VIEW LOG ON my_tab ADD (col1, col2)
INCLUDING NEW VALUES;
ALTER MATERIALIZED VIEW LOG ON my_tab ADD (col1, col2),
ROWID, SEQUENCE INCLUDING NEW VALUES;
alter materialized view log: Allocate and Deallocate Extents
ALTER MATERIALIZED VIEW LOG ON my_tab ALLOCATE EXTENT;
ALTER MATERIALIZED VIEW LOG ON my_tab DEALLOCATE UNUSED;
alter materialized view log: Miscellaneous
ALTER MATERIALIZED VIEW LOG ON my_tab PARALLEL 3;
ALTER MATERIALIZED VIEW LOG ON my_tab NOLOGGING;
ALTER MATERIALIZED VIEW LOG ON my_tab SHRINK SPACE;
alter materialized view log: Physical Attributes and Storage
ALTER MATERIALIZED VIEW LOG ON my_tab
PCTFREE 5 PCTUSED 60
STORAGE (NEXT 100m FREELISTS 5);

alter package: Compile
ALTER PACKAGE pk_my_package COMPILE;
ALTER PACKAGE pk_my_package COMPILE SPECIFICATION;
ALTER PACKAGE pk_my_package COMPILE BODY;

alter procedure: Compile
ALTER PROCEDURE pk_my_package COMPILE;

alter role: Miscellaneous
ALTER ROLE my_role IDENTIFIED BY password;
ALTER ROLE my_role NOT IDENTIFIED;
alter profile: Modify Limits (Password)
ALTER PROFILE my_profile LIMIT FAILED_LOGIN_ATTEMPTS=3;
ALTER PROFILE my_profile LIMIT PASSWORD_LOCK_TIME=2/24;
ALTER PROFILE my_profile LIMIT PASSWORD_GRACE_TIME=5;
ALTER PROFILE my_profile LIMIT PASSWORD_LIFE_TIME=60;
ALTER PROFILE my_profile LIMIT PASSWORD_REUSE_TIME=365 PASSWORD_REUSE_MAX=3;
alter profile: Modify Limits (Resource)
ALTER PROFILE my_profile LIMIT SESSIONS_PER_CPU=10;
ALTER PROFILE my_profile LIMIT CONNECT_TIME=1000;
ALTER PROFILE my_profile LIMIT IDLE_TIME=60;
ALTER PROFILE my_profile LIMIT PRIVATE_SGA=1000000;

alter rollback segment: Online/Offline
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
ALTER ROLLBACK SEGMENT rbs01 ONLINE;
alter rollback segment: Shrink
ALTER ROLLBACK SEGMENT rbs01 SHRINK;
ALTER ROLLBACK SEGMENT rbs01 SHRINK TO 100M;
alter rollback segment: storage Clause
ALTER ROLLBACK SEGMENT rbs01 STORAGE(NEXT 50M OPTIMAL 100M);

alter sequence: Miscellaneous
ALTER SEQUENCE my_seq INCREMENT BY -5;
ALTER SEQUENCE my_seq INCREMENT BY 1 MAXVALUE 50000 CYCLE;
ALTER SEQUENCE my_seq NOMAXVALUE;
ALTER SEQUENCE my_seq CACHE ORDER;
ALTER SEQUENCE my_seq INCREMENT BY 1
MINVALUE 1 MAXVALUE 500 CYCLE;

alter session:
Enable and Disable Parallel Operations
ALTER SESSION ENABLE PARALLEL DML PARALLEL 3;
ALTER SESSION ENABLE PARALLEL DDL;
ALTER SESSION DISABLE PARALLEL QUERY;
alter session: Resumable Space Management
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600;
ALTER SESSION DISABLE RESUMABLE;
alter session: Set Session Parameters
ALTER SESSION SET nls_date_format='MM/DD/YYYY HH24:MI:SS';
ALTER SESSION SET sort_area_size=10000000;
ALTER SESSION SET query_rewrite_enabled=TRUE;
ALTER SESSION SET resumable_timeout=3600;
ALTER SESSION SET skip_unusable_indexes=TRUE;
ALTER SESSION SET SQL_TRACE=TRUE;
alter system:
Logfile and Archive Logfile Management
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM ARCHIVE LOG START;
ALTER SYSTEM ARCHIVE LOG STOP;
ALTER SYSTEM ARCHIVE LOG ALL;
ALTER SYSTEM ARCHIVE LOG THREAD 1 ALL;
ALTER SYSTEM ARCHIVE LOG ALL TO 'C:\oracle\allarch';
alter system: Set System Parameters
ALTER SYSTEM SET db_cache_size=325M
COMMENT='This change is to add more memory to the system'
SCOPE=BOTH;
ALTER SYSTEM SET COMPATIBLE=10.0.0
COMMENT='GOING TO 10G!' SCOPE=SPFILE;
alter system: System Management
ALTER SYSTEM CHECKPOINT GLOBAL;
ALTER SYSTEM KILL SESSION '145,334';
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM DISABLE RESTRICTED SESSION;
ALTER SYSTEM SUSPEND;
ALTER SYSTEM QUIESCE RESTRICTED;
ALTER SYSTEM UNQUIESCE;
ALTER SYSTEM RESUME;
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;

alter table:
External Table Operations
ALTER TABLE ext_parts REJECT LIMIT 500;
ALTER TABLE ext_parts DEFAULT DIRECTORY ext_employee_dir;
ALTER TABLE ext_parts ACCESS PARAMETERS
(FIELDS TERMINATED BY ',');
ALTER TABLE ext_parts LOCATION ('PARTS01.TXT','PARTS02.TXT');
ALTER TABLE ext_parts ADD COLUMN (SSN NUMBER);
alter table: Move Table
ALTER TABLE parts move TABLESPACE parts_new_tbs PCTFREE 10 PCTUSED 60;
alter table: Table Column – Add
ALTER TABLE PARTS ADD (part_location VARCHAR2(20) );
ALTER TABLE PARTS ADD (part_location VARCHAR2(20), part_bin VARCHAR2(30) );
ALTER TABLE parts ADD (photo BLOB)
LOB (photo) STORE AS lob_parts_photo
(TABLESPACE parts_lob_tbs);
alter table: Table Column – Modify
ALTER TABLE PARTS MODIFY (part_location VARCHAR2(30) );
ALTER TABLE PARTS MODIFY
(part_location VARCHAR2(30), part_bin VARCHAR2(20) );
ALTER TABLE parts modify (name NOT NULL);
ALTER TABLE parts modify (name NULL);
ALTER TABLE parts MODIFY LOB (photo) (STORAGE(FREELISTS 2));
ALTER TABLE parts MODIFY LOB (photo) (PCTVERSION 50);
alter table: Table Column – Remove
ALTER TABLE parts DROP (part_location);
ALTER TABLE parts DROP (part_location, part_bin);
alter table: Table Column – Rename
ALTER TABLE parts RENAME COLUMN part_location TO part_loc;
alter table: Table Constraints – Add Check Constraint
ALTER TABLE parts ADD (CONSTRAINT ck_parts_01 CHECK (id > 0) );
alter table: Table Constraints – Add Default Value
ALTER TABLE PARTS MODIFY (name DEFAULT 'Not Available');
ALTER TABLE PARTS ADD (vendor_code NUMBER DEFAULT 0);
ALTER TABLE PARTS MODIFY (part_description DEFAULT NULL);
alter table: Table Constraints – Add Foreign Key
ALTER TABLE parts ADD CONSTRAINT fk_part_bin
FOREIGN KEY (bin_code) REFERENCES part_bin;
alter table: Table Constraints – Add Primary and Unique Key
ALTER TABLE parts ADD CONSTRAINT pk_parts_part_id
PRIMARY KEY (id) USING INDEX TABLESPACE parts_index
STORAGE (INITIAL 100K NEXT 100K PCTINCREASE 0);
ALTER TABLE parts ADD CONSTRAINT uk_parts_part_bin
UNIQUE (part_bin)USING INDEX TABLESPACE parts_index
STORAGE (INITIAL 100K NEXT 100K PCTINCREASE 0);
alter table: Table Constraints – Modify
ALTER TABLE parts DISABLE UNIQUE (part_bin);
ALTER TABLE parts DISABLE CONSTRAINT uk_parts_part_bin;
ALTER TABLE parts DISABLE CONSTRAINT uk_parts_part_bin KEEP INDEX;
ALTER TABLE parts DISABLE CONSTRAINT fk_part_bin;
ALTER TABLE parts DISABLE CONSTRAINT fk_part_bin
DISABLE PRIMARY KEY KEEP INDEX;
ALTER TABLE parts ENABLE CONSTRAINT fk_part_bin;
ALTER TABLE parts ENABLE PRIMARY KEY;
ALTER TABLE parts ENABLE UNIQUE (part_bin);
ALTER TABLE parts ENABLE NOVALIDATE CONSTRAINT fk_part_bin;
ALTER TABLE parts ENABLE NOVALIDATE PRIMARY KEY;
ALTER TABLE parts ENABLE NOVALIDATE UNIQUE (part_bin);
ALTER TABLE parts ENABLE NOVALIDATE PRIMARY KEY
ENABLE NOVALIDATE CONSTRAINT fk_part_bin;
alter table: Table Constraints – Remove
ALTER TABLE parts DROP CONSTRAINT fk_part_bin;
ALTER TABLE parts DROP PRIMARY KEY;
ALTER TABLE parts DROP PRIMARY KEY CASCADE;
ALTER TABLE parts DROP UNIQUE (uk_parts_part_bin);
alter table: Table Partition – Add
ALTER TABLE store_sales ADD PARTITION sales_q1_04
VALUES LESS THAN (TO_DATE('01-APR-2004','DD-MON-YYYY'))
TABLESPACE data_0104_tbs UPDATE GLOBAL INDEXES;
ALTER TABLE daily_transactions ADD PARTITION;
ALTER TABLE daily_transactions
ADD PARTITION Alaska VALUES ('AK');
ALTER TABLE daily_transactions
add PARTITION SALES_2004_Q1 VALUES LESS THAN
(TO_DATE('01-APR-2004','DD-MON-YYYY')) SUBPARTITIONS 4;
alter table: Table Partition – Merge
ALTER TABLE store_sales
MERGE PARTITIONS Oklahoma, texas
INTO PARTITION oktx;
alter table: Table Partition – Move
ALTER TABLE store_sales MOVE PARTITION sales_overflow TABLESPACE
new_sales_overflow STORAGE (INITIAL 100m NEXT 100m PCTINCREASE 0)
UPDATE GLOBAL INDEXES;
alter table: Table Partition – Remove
ALTER TABLE store_sales DROP PARTITION sales_q1_04 UPDATE GLOBAL INDEXES;
alter table: Table Partition – Rename
ALTER TABLE store_sales RENAME PARTITION sales_q1 TO sales_first_quarter;
alter table: Table Partition – Split
ALTER TABLE store_sales
SPLIT PARTITION sales_overflow AT
(TO_DATE('01-FEB-2004','DD-MON-YYYY') )
INTO (PARTITION sales_q4_2003,
PARTITION sales_overflow)
UPDATE GLOBAL INDEXES;
ALTER TABLE composite_sales SPLIT PARTITION sales_q1
AT (TO_DATE('15-FEB-2003','DD-MON-YYYY'))
INTO (PARTITION sales_q1_01 SUBPARTITIONS 4
STORE IN (q1_01_tab1, q1_01_tab2, q1_01_tab3, q1_01_tab4),
PARTITION sales_q1_02 SUBPARTITIONS 4
STORE IN (q1_02_tab1, q1_02_tab2, q1_02_tab3, q1_02_tab4) )
UPDATE GLOBAL INDEXES;
alter table: Table Partition – Truncate
ALTER TABLE store_sales TRUNCATE PARTITION sales_overflow
UPDATE GLOBAL INDEXES;
alter table: Table Properties
ALTER TABLE parts PCTFREE 10 PCTUSED 60;
ALTER TABLE parts STORAGE (NEXT 1M);
ALTER TABLE parts PARALLEL 4;
alter table: Triggers – Modify Status
ALTER TABLE parts DISABLE ALL TRIGGERS;
ALTER TABLE parts ENABLE ALL TRIGGERS;
alter tablespace: Backups
ALTER TABLESPACE my_data_tbs BEGIN BACKUP;
ALTER TABLESPACE my_data_tbs END BACKUP;
alter tablespace: Data Files and Tempfiles
ALTER TABLESPACE mytbs
ADD DATAFILE '/ora100/oracle/mydb/mydb_mytbs_01.dbf' SIZE 100M;
ALTER TABLESPACE mytemp
ADD TEMPFILE '/ora100/oracle/mydb/mydb_mytemp_01.dbf'
SIZE 100M;
ALTER TABLESPACE mytemp AUTOEXTEND OFF;
ALTER TABLESPACE mytemp AUTOEXTEND ON NEXT 100m MAXSIZE 1G;
alter tablespace: Rename
ALTER TABLESPACE my_data_tbs RENAME TO my_newdata_tbs;
alter tablespace: Tablespace Management
ALTER TABLESPACE my_data_tbs DEFAULT
STORAGE (INITIAL 100m NEXT 100m FREELISTS 3);
ALTER TABLESPACE my_data_tbs MINIMUM EXTENT 500k;
ALTER TABLESPACE my_data_tbs RESIZE 100m;
ALTER TABLESPACE my_data_tbs COALESCE;
ALTER TABLESPACE my_data_tbs OFFLINE;
ALTER TABLESPACE my_data_tbs ONLINE;
ALTER TABLESPACE mytbs READ ONLY;
ALTER TABLESPACE mytbs READ WRITE;
ALTER TABLESPACE mytbs FORCE LOGGING;
ALTER TABLESPACE mytbs NOLOGGING;
ALTER TABLESPACE mytbs FLASHBACK ON;
ALTER TABLESPACE mytbs FLASHBACK OFF;
ALTER TABLESPACE mytbs RETENTION GUARANTEE;
ALTER TABLESPACE mytbs RETENTION NOGUARANTEE;
alter trigger
ALTER TRIGGER tr_my_trigger DISABLE;
ALTER TRIGGER tr_my_trigger ENABLE;
ALTER TRIGGER tr_my_trigger RENAME TO tr_new_my_trigger;
ALTER TRIGGER tr_my_trigger COMPILE;
alter user: Change Password
ALTER USER olduser IDENTIFIED BY newpassword;
ALTER USER olduser IDENTIFIED EXTERNALLY;
alter user: Password and Account Management
ALTER USER olduser PASSWORD EXPIRE;
ALTER USER olduser ACCOUNT LOCK;
ALTER USER olduser ACCOUNT UNLOCK;
alter user: Profile
ALTER USER olduser PROFILE admin_profile;
alter user: Quotas
ALTER USER olduser QUOTA UNLIMITED ON users;
ALTER USER olduser QUOTA 10000M ON USERS;
alter user: Roles
ALTER USER olduser DEFAULT ROLE admin_role;
ALTER USER olduser DEFAULT ROLE NONE;
ALTER USER olduser DEFAULT ROLE ALL EXCEPT admin_role;
alter user: Tablespace Assignments
ALTER USER olduser DEFAULT TABLESPACE users;
ALTER USER olduser TEMPORARY TABLESPACE temp;
alter view: Constraints
ALTER VIEW my_view
ADD CONSTRAINT u_my_view_01 UNIQUE (empno)
RELY DISABLE NOVALIDATE;
ALTER VIEW my_view DROP CONSTRAINT u_my_view_01;
ALTER VIEW my_view DROP PRIMARY KEY;
ALTER VIEW my_view MODIFY CONSTRAINT u_my_view_01 NORELY;
ALTER VIEW my_view MODIFY CONSTRAINT u_my_view_01 RELY;
alter view: Recompile
ALTER VIEW my_view RECOMPILE;
analyze: Analyze Cluster
ANALYZE CLUSTER my_cluster_tab COMPUTE STATISTICS FOR ALL ROWS;
ANALYZE CLUSTER my_cluster_tab
ESTIMATE STATISTICS SAMPLE 10000 ROWS FOR ALL ROWS;
analyze: Analyze Index
ANALYZE INDEX ix_tab_01 COMPUTE STATISTICS FOR ALL ROWS;
ANALYZE INDEX ix_tab_01
ESTIMATE STATISTICS SAMPLE 10000 ROWS FOR ALL ROWS;
analyze: Analyze Table
ANALYZE TABLE mytab COMPUTE STATISTICS
FOR ALL INDEXED COLUMNS SIZE 100;
ANALYZE TABLE mytab COMPUTE STATISTICS
FOR ALL INDEXES;
audit
AUDIT ALL ON scott.emp;
AUDIT UPDATE, DELETE ON scott.emp;
AUDIT SELECT on scott.emp WHENEVER NOT SUCCESSFUL;
AUDIT INSERT, UPDATE, DELETE ON DEFAULT;
comment
COMMENT ON TABLE scott.mytab IS
'This is a comment on the mytab table';
COMMENT ON COLUMN scott.mytab.col1 IS
'This is a comment on the col1 column';
COMMENT ON MATERIALIZED VIEW scott.mview IS
'This is a comment on the materialized view mview';
create cluster
CREATE CLUSTER pub_cluster (pubnum NUMBER)
SIZE 8K PCTFREE 10 PCTUSED 60 TABLESPACE user_data;
CREATE CLUSTER pub_cluster (pubnum NUMBER)
SIZE 8K HASHKEYS 1000 PCTFREE 10 PCTUSED 60
TABLESPACE user_data;
create control file
CREATE CONTROLFILE REUSE DATABASE "mydb"
NORESETLOGS NOARCHIVELOG
MAXLOGFILES 32 MAXLOGMEMBERS 3
MAXDATAFILES 200 MAXINSTANCES 1
MAXLOGHISTORY 1000
LOGFILE
GROUP 1 ('/ora01/oracle/mydb/mydb_redo1a.rdo',
'/ora02/oracle/mydb/mydb_redo1b.rdo') SIZE 500K,
GROUP 2 ('/ora01/oracle/mydb/mydb_redo2a.rdo',
'/ora01/oracle/mydb/mydb_redo2b.rdo') SIZE 500K
DATAFILE
'/ora01/oracle/mydb/mydb_system_01.dbf',
'/ora01/oracle/mydb/mydb_users_01.dbf',
'/ora01/oracle/mydb/mydb_undo_01.dbf',
'/ora01/oracle/mydb/mydb_sysaux_01.dbf',
'/ora01/oracle/mydb/mydb_alldata_01.dbf';
create database
CREATE DATABASE prodb
MAXINSTANCES 1 MAXLOGHISTORY 1
MAXLOGFILES 5 MAXLOGMEMBERS 3
MAXDATAFILES 100
DATAFILE 'C:\oracle\ora92010\prodb\system01.dbf'
SIZE 250M REUSE AUTOEXTEND ON NEXT 10240K
MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL DEFAULT
TEMPORARY TABLESPACE TEMP
TEMPFILE 'C:\oracle\ora92010\prodb\temp01.dbf'
SIZE 40M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SYSAUX TABLESPACE
DATAFILE 'C:\oracle\ora92010\prodb\sysauxtbs01.dbf'
SIZE 300M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
UNDO TABLESPACE "UNDOTBS1"
DATAFILE 'C:\oracle\ora92010\prodb\undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
LOGFILE
GROUP 1 ('C:\oracle\ora92010\prodb\redo01.log') SIZE 102400K,
GROUP 2 ('C:\oracle\ora92010\prodb\redo02.log') SIZE 102400K,
GROUP 3 ('C:\oracle\ora92010\prodb\redo03.log') SIZE 102400K;
create database link
CREATE DATABASE LINK my_db_link
CONNECT TO current_user
USING 'my_db';
CREATE PUBLIC DATABASE LINK my_db_link
CONNECT TO remote_user IDENTIFIED BY psicorp
USING 'my_db';
create directory
CREATE OR REPLACE DIRECTORY mydir AS
'/opt/oracle/admin/directories/mydir';
create function
CREATE OR REPLACE FUNCTION find_value_in_table
(p_value IN NUMBER, p_table IN VARCHAR2,
p_column IN VARCHAR2)
RETURN NUMBER IS
v_found NUMBER;
v_sql VARCHAR2(2000);
BEGIN
v_sql:='SELECT 1 FROM '||p_table||' WHERE '||p_column||
' = '||p_value;
execute immediate v_sql into v_found;
return v_found;
END;
/
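A possible invocation of the function above, assuming the emp table used in the other examples (the EMPID column is taken from those examples):
SELECT find_value_in_table(100, 'EMP', 'EMPID') FROM dual;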
create index: Function-Based Index
CREATE INDEX fb_upper_last_name_emp ON emp_info (UPPER(last_name) );
create index: Global Partitioned Indexes
CREATE INDEX ix_part_my_tab_01 ON store_sales (invoice_number)
GLOBAL PARTITION BY RANGE (invoice_number)
(PARTITION part_001 VALUES LESS THAN (1000),
PARTITION part_002 VALUES LESS THAN (10000),
PARTITION part_003 VALUES LESS THAN (MAXVALUE) );
CREATE INDEX ix_part_my_tab_02 ON store_sales
(store_id, time_id)
GLOBAL PARTITION BY RANGE (store_id, time_id)
(PARTITION PART_001 VALUES LESS THAN
(1000, TO_DATE('04-01-2003','MM-DD-YYYY') )
TABLESPACE partition_001
STORAGE (INITIAL 100M NEXT 200M PCTINCREASE 0),
PARTITION part_002 VALUES LESS THAN
(1000, TO_DATE('07-01-2003','MM-DD-YYYY') )
TABLESPACE partition_002
STORAGE (INITIAL 200M NEXT 400M PCTINCREASE 0),
PARTITION part_003 VALUES LESS THAN (maxvalue, maxvalue)
TABLESPACE partition_003 );
create index: Local Partitioned Indexes
CREATE INDEX ix_part_my_tab_01 ON my_tab
(col_one, col_two, col_three)
LOCAL (PARTITION tbs_part_01 TABLESPACE part_tbs_01,
PARTITION tbs_part_02 TABLESPACE part_tbs_02,
PARTITION tbs_part_03 TABLESPACE part_tbs_03,
PARTITION tbs_part_04 TABLESPACE part_tbs_04);
CREATE INDEX ix_part_my_tab_01 ON my_tab (col_one, col_two, col_three)
LOCAL STORE IN (part_tbs_01, part_tbs_02, part_tbs_03, part_tbs_04);
CREATE INDEX ix_part_my_tab_01 ON my_tab (col_one, col_two, col_three)
LOCAL STORE IN (
part_tbs_01 STORAGE (INITIAL 10M NEXT 10M MAXEXTENTS 200),
part_tbs_02,
part_tbs_03 STORAGE (INITIAL 100M NEXT 100M MAXEXTENTS 200),
part_tbs_04 STORAGE (INITIAL 1000M NEXT 1000M MAXEXTENTS 200));
create index: Local Subpartitioned Indexes
CREATE INDEX sales_ix ON store_sales(time_id, store_id)
STORAGE (INITIAL 1M MAXEXTENTS UNLIMITED) LOCAL
(PARTITION q1_2003,
PARTITION q2_2003,
PARTITION q3_2003
(SUBPARTITION pq3200301, SUBPARTITION pq3200302,
SUBPARTITION pq3200303, SUBPARTITION pq3200304,
SUBPARTITION pq3200305),
PARTITION q4_2003
(SUBPARTITION pq4200301 TABLESPACE tbs_1,
SUBPARTITION pq4200302 TABLESPACE tbs_1,
SUBPARTITION pq4200303 TABLESPACE tbs_1,
SUBPARTITION pq4200304 TABLESPACE tbs_1,
SUBPARTITION pq4200305 TABLESPACE tbs_1,
SUBPARTITION pq4200306 TABLESPACE tbs_1,
SUBPARTITION pq4200307 TABLESPACE tbs_1,
SUBPARTITION pq4200308 TABLESPACE tbs_1),
PARTITION sales_overflow
(SUBPARTITION pqoflw01 TABLESPACE tbs_2,
SUBPARTITION pqoflw02 TABLESPACE tbs_2,
SUBPARTITION pqoflw03 TABLESPACE tbs_2,
SUBPARTITION pqoflw04 TABLESPACE tbs_2));
create index: Nonpartitioned Indexes
CREATE INDEX ix_mytab_01 ON mytab(column_1);
CREATE UNIQUE INDEX ix_mytab_01 ON mytab(column_1, column_2, column_3);
CREATE INDEX ix_mytab_01 ON mytab(column_1, column_2, column_3)
TABLESPACE my_indexes COMPRESS
STORAGE (INITIAL 10K NEXT 10K PCTFREE 10) COMPUTE STATISTICS;
CREATE BITMAP INDEX bit_mytab_01 ON my_tab(col_two)
TABLESPACE my_tbs;
create materialized view
CREATE MATERIALIZED VIEW emp_dept_mv1
TABLESPACE users BUILD IMMEDIATE
REFRESH FAST ON COMMIT WITH ROWID
ENABLE QUERY REWRITE AS
SELECT d.rowid deptrowid, e.rowid emprowid,
e.empno, e.ename, e.job, d.loc
FROM dept d, emp e
WHERE d.deptno = e.deptno;
CREATE MATERIALIZED VIEW emp_dept_mv3
TABLESPACE users BUILD IMMEDIATE
REFRESH FAST ON COMMIT WITH ROWID
DISABLE QUERY REWRITE AS
SELECT d.rowid deptrowid, e.rowid emprowid,
d.dname, d.loc, e.ename, e.job
FROM dept d, emp e
WHERE d.deptno (+) = e.deptno;
create materialized view: Partitioned Materialized View
CREATE MATERIALIZED VIEW part_emp_mv1
PARTITION BY RANGE (hiredate)
(PARTITION month1
VALUES LESS THAN (TO_DATE('01-APR-1981', 'DD-MON-YYYY'))
PCTFREE 0 PCTUSED 99
STORAGE (INITIAL 64k NEXT 16k PCTINCREASE 0)
TABLESPACE users,
PARTITION month2
VALUES LESS THAN (TO_DATE('01-DEC-1981', 'DD-MON-YYYY'))
PCTFREE 0 PCTUSED 99
STORAGE (INITIAL 64k NEXT 16k PCTINCREASE 0)
TABLESPACE users,
PARTITION month3
VALUES LESS THAN (TO_DATE('01-APR-1988', 'DD-MON-YYYY'))
PCTFREE 0 PCTUSED 99
STORAGE (INITIAL 64k NEXT 16k PCTINCREASE 0)
TABLESPACE users)
BUILD IMMEDIATE REFRESH FAST ENABLE QUERY REWRITE AS
SELECT hiredate, count(*) as hires
FROM emp
GROUP BY hiredate;
create materialized view log
CREATE MATERIALIZED VIEW LOG ON emp
TABLESPACE users
WITH PRIMARY KEY, SEQUENCE,
(ename, job, mgr, hiredate, sal, comm, deptno)
INCLUDING NEW VALUES;
create package/create package body
CREATE OR REPLACE PACKAGE get_Tomdate_pkg IS
FUNCTION GetTomdate RETURN DATE;
PRAGMA RESTRICT_REFERENCES (GetTomdate, WNDS);
PROCEDURE ResetSysDate;
END get_Tomdate_pkg;
/
CREATE OR REPLACE PACKAGE BODY get_Tomdate_pkg IS
v_Sysdate DATE := TRUNC(SYSDATE);
FUNCTION GetTomdate RETURN DATE IS
BEGIN
RETURN v_sysdate+1;
END GetTomdate;
PROCEDURE ResetSysdate IS
BEGIN
v_Sysdate := SYSDATE;
END ResetSysdate;
END get_Tomdate_pkg;
/
create pfile
CREATE PFILE FROM SPFILE;
CREATE PFILE='/opt/oracle/admin/mydb/pfile/initmybd.ora'
FROM SPFILE='/opt/oracle/admin/mydb/pfile/spfilemybd.ora';
create procedure
CREATE OR REPLACE PROCEDURE new_emp_salary
(p_empid IN NUMBER, p_increase IN NUMBER)
AS
BEGIN
UPDATE emp SET salary=salary*p_increase
WHERE empid=p_empid;
END;
/
create profile
CREATE PROFILE development_profile
LIMIT
SESSIONS_PER_USER 2 CONNECT_TIME 100000 IDLE_TIME 100000
LOGICAL_READS_PER_SESSION 1000000
PRIVATE_SGA 10m
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LIFE_TIME 60
PASSWORD_REUSE_TIME 365
PASSWORD_REUSE_MAX 3
PASSWORD_LOCK_TIME 30
PASSWORD_GRACE_TIME 5;
create role
CREATE ROLE developer_role IDENTIFIED USING develop;
create rollback segment
CREATE ROLLBACK SEGMENT r01 TABLESPACE RBS
STORAGE (INITIAL 100m NEXT 100M MINEXTENTS 5 OPTIMAL 500M);
create sequence
CREATE SEQUENCE my_seq
START WITH 1 INCREMENT BY 1 MAXVALUE 1000000 CYCLE CACHE 20;
create spfile
CREATE SPFILE FROM PFILE;
CREATE SPFILE='/opt/oracle/admin/mydb/pfile/spfilemybd.ora'
FROM PFILE='/opt/oracle/admin/mydb/pfile/initmybd.ora';
create synonym
CREATE SYNONYM scott_user.emp FOR scott.EMP;
CREATE PUBLIC SYNONYM emp FOR scott.EMP;
create table
CREATE TABLE my_tab
(id NUMBER, current_value VARCHAR2(2000) ) COMPRESS;
CREATE TABLE parts (id NUMBER, version NUMBER, name VARCHAR2(30),
Bin_code NUMBER, upc NUMBER, active_code VARCHAR2(1) NOT NULL
CONSTRAINT ck_parts_active_code_01
CHECK (UPPER(active_code)= 'Y' or UPPER(active_code)='N'),
CONSTRAINT pk_parts PRIMARY KEY (id, version)
USING INDEX TABLESPACE parts_index
STORAGE (INITIAL 1m NEXT 1m) )
TABLESPACE parts_tablespace
PCTFREE 20 PCTUSED 60 STORAGE ( INITIAL 10m NEXT 10m PCTINCREASE 0);
create tablespace: Permanent Tablespace
CREATE TABLESPACE data_tbs
DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf'
SIZE 100m;
CREATE TABLESPACE data_tbs
DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf'
SIZE 100m FORCE LOGGING BLOCKSIZE 8k;
CREATE TABLESPACE data_tbs
DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf'
SIZE 100m NOLOGGING
DEFAULT COMPRESS EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
CREATE TABLESPACE data_tbs
DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf'
SIZE 100m NOLOGGING
DEFAULT COMPRESS EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;
CREATE BIGFILE TABLESPACE data_tbs
DATAFILE '/opt/oracle/mydbs/data/mydbs_data_tbs_01.dbf'
SIZE 10G;
create tablespace: Temporary Tablespace
CREATE TABLESPACE temp_tbs
TEMPFILE '/opt/oracle/mydbs/data/mydbs_temp_tbs_01.tmp'
SIZE 100m;
create tablespace: Undo Tablespace
CREATE UNDO TABLESPACE undo_tbs
DATAFILE '/opt/oracle/mydbs/data/mydbs_undo_tbs_01.dbf'
SIZE 1g RETENTION GUARANTEE;
create trigger
CREATE OR REPLACE TRIGGER emp_comm_before_insert
BEFORE INSERT ON emp FOR EACH ROW
DECLARE
v_sal number;
BEGIN
-- Compute a 10 percent commission from the new salary
v_sal:=:new.salary;
:new.comm:=v_sal*.10;
END;
/
create user
CREATE USER Robert IDENTIFIED BY Freeman
DEFAULT TABLESPACE users_tbs
TEMPORARY TABLESPACE temp
QUOTA 100M ON users_tbs
QUOTA UNLIMITED ON data_tbs;
create view
CREATE OR REPLACE VIEW vw_emp_dept_10 AS
SELECT * FROM EMP WHERE dept=10;
CREATE OR REPLACE VIEW vw_public_email AS
SELECT ename_first, ename_last, email_address
FROM EMP WHERE public='Y';
delete
DELETE FROM emp WHERE empid=100;
DELETE FROM emp e WHERE e.rowid >
(SELECT MIN (esub.ROWID) FROM emp esub
WHERE e.empid=esub.empid);
drop cluster
DROP CLUSTER scott.emp_cluster
INCLUDING TABLES CASCADE CONSTRAINTS;
drop database
DROP DATABASE;
drop database link
DROP DATABASE LINK my_db_link;
DROP PUBLIC DATABASE LINK my_db_link;
drop directory
DROP DIRECTORY mydir;
drop function
DROP FUNCTION find_value_in_table;
drop index
DROP INDEX ix_my_tab;
drop materialized view
DROP MATERIALIZED VIEW my_mview;
DROP MATERIALIZED VIEW my_mview PRESERVE TABLE;
drop materialized view log
DROP MATERIALIZED VIEW LOG ON mytab;
drop package/drop package body
DROP PACKAGE scott.my_package;
DROP PACKAGE BODY scott.my_package;
drop procedure
DROP PROCEDURE my_proc;
drop profile
DROP PROFILE my_profile CASCADE;
drop role
DROP ROLE my_role;
drop rollback segment
DROP ROLLBACK SEGMENT rbs01;
drop sequence
DROP SEQUENCE my_seq;
drop synonym
DROP SYNONYM my_synonym;
DROP PUBLIC SYNONYM my_synonym;
drop table
DROP TABLE my_tab;
DROP TABLE my_tab CASCADE CONSTRAINTS;
DROP TABLE my_tab CASCADE CONSTRAINTS PURGE;
drop tablespace
DROP TABLESPACE my_tbs;
DROP TABLESPACE my_tbs INCLUDING CONTENTS;
DROP TABLESPACE my_tbs INCLUDING CONTENTS
AND DATAFILES CASCADE CONSTRAINTS;
drop trigger
DROP TRIGGER my_trigger;
drop user
DROP USER my_user CASCADE;
drop view
DROP VIEW my_view CASCADE CONSTRAINTS;
explain plan
EXPLAIN PLAN SET STATEMENT_ID='TEST' FOR
SELECT * FROM emp WHERE EMPID=100;
flashback database
FLASHBACK DATABASE TO SCN 10000;
FLASHBACK DATABASE TO TIMESTAMP SYSDATE - 1/24;
FLASHBACK DATABASE TO BEFORE TIMESTAMP SYSDATE - 1/24;
flashback table
FLASHBACK TABLE my_tab TO SCN 10000;
FLASHBACK TABLE my_tab TO TIMESTAMP SYSDATE - 1/24
ENABLE TRIGGERS;
FLASHBACK TABLE my_tab TO BEFORE DROP;
FLASHBACK TABLE my_tab TO BEFORE DROP RENAME TO rec_tab;
grants: Object Grants
GRANT SELECT ON scott.my_tab TO my_user;
GRANT INSERT, UPDATE, SELECT ON scott.my_tab TO my_user;
GRANT SELECT ON scott.my_tab TO my_user WITH GRANT OPTION;
GRANT SELECT ON scott.my_tab TO PUBLIC WITH GRANT OPTION;
grants: System Grants
GRANT CREATE TABLE to my_user;
GRANT CREATE ANY TABLE to my_user WITH ADMIN OPTION;
GRANT ALL PRIVILEGES to my_user WITH ADMIN OPTION;
insert
INSERT INTO dept VALUES (100, 'Marketing', 'Y');
INSERT INTO dept (deptid, dept_name, active)
VALUES (100, 'Marketing', 'Y');
INSERT INTO emp_history SELECT * FROM emp a
WHERE a.empid NOT IN (SELECT empid FROM emp_history);
INSERT INTO emp_pay_summary
SELECT empid, sum(gross_pay) FROM emp_pay_history
GROUP BY empid;
INSERT ALL
INTO store_sales (store_id, sales_date, deptid, sales_amt)
VALUES (store_id, start_date, deptid, mon_sales)
INTO store_sales (store_id, sales_date, deptid, sales_amt)
VALUES (store_id, start_date+1, deptid, tue_sales)
INTO store_sales (store_id, sales_date, deptid, sales_amt)
VALUES (store_id, start_date+2, deptid, wed_sales)
INTO store_sales (store_id, sales_date, deptid, sales_amt)
VALUES (store_id, start_date+3, deptid, thur_sales)
INTO store_sales (store_id, sales_date, deptid, sales_amt)
VALUES (store_id, start_date+4, deptid, fri_sales)
INTO store_sales (store_id, sales_date, deptid, sales_amt)
VALUES (store_id, start_date+5, deptid, sat_sales)
INTO store_sales (store_id, sales_date, deptid, sales_amt)
VALUES (store_id, start_date+6, deptid, sun_sales)
SELECT store_id, start_date, deptid, mon_sales, tue_sales,
wed_sales, thur_sales, fri_sales, sat_sales, sun_sales
FROM store_sales_load;
INSERT ALL
WHEN store_id <= 100 THEN INTO west_stores
ELSE INTO misc_stores
SELECT * FROM store_sales_load;
INSERT /*+ APPEND */ INTO emp VALUES (100,
'Jacob','Freeman',1000,20, null, 10, sysdate, 100,
sysdate+365);
lock table
LOCK TABLE my_table IN EXCLUSIVE MODE NOWAIT;
LOCK TABLE my_table IN ROW EXCLUSIVE MODE;
merge
MERGE INTO emp_retire A
USING (SELECT empid, ename_last, ename_first, salary, retire_cd
FROM emp WHERE retire_cd IN ('Y','D')) B
ON (a.empid=b.empid)
WHEN MATCHED THEN UPDATE SET
a.ename_last=b.ename_last,
a.ename_first=b.ename_first,
a.salary=b.salary
DELETE WHERE (b.retire_cd='D')
WHEN NOT MATCHED THEN INSERT
(a.empid, a.ename_last, a.ename_first, a.salary)
VALUES (b.empid, b.ename_last, b.ename_first, b.salary)
WHERE (b.retire_cd!='D');
noaudit
NOAUDIT ALL ON scott.emp;
NOAUDIT UPDATE, DELETE ON scott.emp;
NOAUDIT SELECT on scott.emp WHENEVER NOT SUCCESSFUL;
NOAUDIT INSERT, UPDATE, DELETE ON DEFAULT;
purge
PURGE TABLE my_tab;
PURGE INDEX ix_my_tab;
PURGE RECYCLEBIN;
PURGE DBA_RECYCLEBIN;
PURGE TABLESPACE data_tbs USER scott;
recover
RECOVER DATABASE;
RECOVER TABLESPACE user_data, user_index;
RECOVER DATAFILE
'/opt/oracle/admin/mydb/datafile/mydb_users_01.dbf';
RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
RECOVER DATABASE UNTIL CHANGE 94044;
RECOVER DATABASE UNTIL TIME '2004-08-01:22:00:04';
rename
RENAME my_table to my_tab;
revoke: Object Grants
REVOKE SELECT ON scott.my_tab FROM my_user;
REVOKE INSERT, UPDATE, SELECT ON scott.my_tab FROM my_user;
REVOKE SELECT ON scott.my_tab FROM PUBLIC;
revoke: System Grants
REVOKE CREATE TABLE FROM my_user;
REVOKE CREATE ANY TABLE FROM my_user;
REVOKE ALL PRIVILEGES FROM my_user;
rollback
ROLLBACK;
savepoint
SAVEPOINT alpha;
select
SELECT ename_last, dname
FROM emp a, dept b
WHERE a.deptid=b.deptid;
SELECT a.empid, b.dept_name
FROM emp a, dept b
WHERE a.deptid=b.deptid (+);
SELECT a.empid, b.dept_name
FROM emp a LEFT OUTER JOIN dept b
ON a.deptid=b.deptid;
SELECT * FROM dept WHERE EXISTS
(SELECT * FROM emp
WHERE emp.deptid=dept.deptid
AND emp.salary > 100);
SELECT ename_first, ename_last,
CASE deptid
WHEN 10 THEN 'Accounting' WHEN 20 THEN 'Sales'
ELSE 'None' END FROM emp;
SELECT empid, ename_last, salary, comm
FROM emp a
WHERE salary*.10 > (SELECT AVG(comm) FROM emp z
WHERE a.deptid=z.deptid);
WITH avg_dept_sales AS (
SELECT a.deptid, avg(b.sales_amt) avg_sales
FROM emp a, dept_sales b
WHERE a.deptid=b.deptid
GROUP BY a.deptid),
emp_salaries AS
(SELECT empid, AVG(salary) avg_salary FROM emp
GROUP BY empid)
SELECT * FROM emp_salaries b WHERE avg_salary*.05 >
(SELECT avg_sales FROM avg_dept_sales);
SELECT /*+ INDEX (a, emp_last_name_ix) */ empid
FROM emp a WHERE ename_last='Freeman';
SELECT empid, TO_CHAR(retire_date, 'MM/DD/YYYY')
FROM emp
WHERE retire_date IS NOT NULL
ORDER BY retire_date;
SELECT empid, COUNT(*)
FROM emp
GROUP BY empid
HAVING COUNT(*) > 1;
SELECT empid, salary FROM emp
AS OF TIMESTAMP(SYSTIMESTAMP - INTERVAL '1' DAY)
WHERE empid=20;
SELECT empid, salary FROM emp
VERSIONS BETWEEN
TIMESTAMP SYSTIMESTAMP - INTERVAL '1' DAY AND
SYSTIMESTAMP - INTERVAL '1' HOUR
WHERE empid=20;
set constraints
SET CONSTRAINTS ALL IMMEDIATE;
SET CONSTRAINTS ALL DEFERRED;
SET CONSTRAINT fk_my_tab DEFERRED;
set transaction
SET TRANSACTION USE ROLLBACK SEGMENT rbs01;
SET TRANSACTION READ ONLY;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
truncate
TRUNCATE TABLE my_tab;
TRUNCATE TABLE my_tab PRESERVE MATERIALIZED VIEW LOG;
TRUNCATE TABLE my_tab REUSE STORAGE;
TRUNCATE TABLE my_tab DROP STORAGE;
update
UPDATE emp SET salary=100 WHERE empid=100;
UPDATE emp SET salary=NULL, retire_date=SYSDATE
WHERE empid=100;
UPDATE emp SET salary=salary*1.10
WHERE deptid IN
(SELECT deptid FROM dept WHERE dept_name = 'Sales');
UPDATE emp a SET (salary, comm)=
(SELECT salary*1.10, comm*1.10
FROM emp b WHERE a.empid=b.empid);
UPDATE store_sales
PARTITION (store_sales_jan_2004) sa
SET sa.sales_amt=1.10 WHERE store_id=100;

June 17, 2008

All about storage: SAN, RAID, DISKS – SQL Server DBA

One of the easiest ways to improve the lifetime performance of a SQL Server database is proper setup of the physical and logical drives. While it's an easy technique, proper disk subsystem configuration is often overlooked or handled by a member of the IT staff other than a SQL Server DBA.

Storage specific terms that everyone should know:
· RAID – Redundant Array of Inexpensive Disks, also known as Redundant Array of Independent Disks.
· Disk subsystem – A general term that refers to the disks on the server.
· Spindle – Spindles are another way to refer to the physical disk drives that make up the RAID array.
· I/O Ops – Input/Output operations, usually measured per second.
· Queuing – Number of I/O Ops that are pending completion by the disk subsystem.
· SAN – Storage area networks are collections of storage devices and fibre switches connected together along with the servers that access the storage on the device. SAN has also become a generic term for the storage hardware itself, from vendors such as EMC, 3PAR and Hitachi.
· LUN – Logical Unit Number – This is the identification number assigned to a volume when created on a SAN device.
· Physical drive – How Windows sees any RAID array, single drive or LUN that is attached to the server.
· Logical drive – How Windows presents drives to the user (C:, D:, E:, etc.).
· Block size – The amount of data read from the spindles in a single read operation. This size varies per vendor from 8 KB to 256 MB.
· Hardware array – A RAID array created using a physical RAID controller.
· Software array – A RAID array created within Windows using the computer management snap-in.
· Hot spare – A spindle that sits in the drive cage and is added to the array automatically in the event of a drive failure. While this does not increase capacity, it does reduce the amount of time that the array is susceptible to data loss because of a second failed drive.
· Recovery time – Amount of time needed for the RAID array to become fully redundant after a failed drive has been replaced, either manually or automatically via a hot spare.

RAID system:
RAID 10 – Mirrored Stripe Sets. A RAID 10 array is most useful for high read or high write operations. RAID 10 is extremely fast; however, it is also extremely expensive compared to the other RAID levels available. In basic terms, a RAID 10 array is several RAID 1 arrays striped together for performance. As with a RAID 1 array, as data is written to the active drive in the pair, it is also written to the secondary drive in the pair. A RAID 10 array can survive several drive failures so long as no two drives in a single pair are lost.

RAID 0 – Disk Striping. RAID 0 is technically not a RAID level, since it provides no fault tolerance (no parity). Data is written in blocks across multiple drives, so one drive can be writing or reading a block while the next is seeking the next block. The advantage of RAID 0 is its high level of I/O performance, because the I/Os are spread across multiple channels and drives. Because RAID 0 offers no fault tolerance, the failure of a single drive results in the loss of all your data. It should never be used in an environment where data is mission critical.

RAID 1 – Mirror. A RAID 1 array is most useful for high write files, such as the page file, transaction logs and the tempdb database. A RAID 1 array takes two physical disks and creates an exact duplicate of the primary drive on the backup drive. There is no significant performance gain or loss when using a RAID 1 array. This array can survive a single drive failure without incurring any data loss.

SAN considerations for your SQL Server environment
The SAN's sole function is to store data and offer highly reliable, high performance access to that data. It is a network which provides high-speed, highly reliable transportation of data for multiple servers, which generally connect through a high-speed optical network called fibre. A SAN consists of many high performance hard drives (typically several hundred) aggregated together with high performance controllers and caching. These hard drives are virtualized, so the consumer does not know which hard drives a SQL Server or other device connected to the SAN will access. A SAN administrator presents blocks of storage to servers using the SAN; these blocks can consist of a single hard drive, multiple hard drives or portions of hard drives in a logical unit called a LUN (Logical Unit Number).

SANs have several advantages over locally attached storage:
· There is a distance limitation for SCSI connections which the SAN fibre network overcomes.
· Most SANs provide features which allow you to clone, snapshot, or rapidly move (replicate) data from one location to another; file copies or bcp over your network simply are not scalable. This increases a SAN's usefulness for disaster recovery.
· SANs play well with clusters. Clusters share resources between the nodes that form the cluster.
· SANs allow a server or a cluster to boot directly from the SAN.
· SANs offer increased utilization of storage. With locally attached storage large amounts of disk space can be wasted. With a SAN you can expand or contract the amount of disk space a server or cluster can access.
· SANs will also offload some of the processing from the host system to the SAN itself.
Benefits of a SAN:
The primary benefits of a SAN are:
Availability: A single copy of data is accessible to any and all hosts via multiple paths.
Reliability: Dependable data transportation ensures a low error rate, and an ability to recover from failures is provided.
Scalability: Servers and storage devices may be added independently of one another, and do not depend on proprietary systems.
Performance: Fibre Channel (the standard method for SAN interconnectivity) offers 100 MB/sec bandwidth with low overhead, and it separates storage and network I/O.
Manageability: Emerging software and standards for both FC-AL and Fibre Channel fabric allow single centralized management and proactive error detection and correction.
Return On Information Management: Due to increased redundancy and superior manageability, as well as the ability to add storage and servers independently of one another, SANs provide a lower cost of ownership and a higher Return On Information Management (ROIM).

When requesting storage from your SAN admin or Windows admin, tell them what RAID level and storage tier you need. That's important because the RAID level determines how much read and write I/O the volume can sustain, as well as the amount of redundancy you get. It's important that a DBA is involved in the decision process for RAID level volumes -- after all, you know the data within the system much better than any storage engineer does.

June 05, 2008

Basic steps to improve SQL Server Performance

1. Rebuild your indexes
2. Update statistics: EXEC sp_updatestats (or UPDATE STATISTICS tableName WITH FULLSCAN for a full scan)
3. Recompile stored procedures: EXEC sp_recompile 'SP_Name' (a consolidated sketch of steps 1-3 follows this list)
Developers often change table data types and forget to change the stored procedure, so the optimizer cannot generate a good query plan.
4. Look at the graphical execution plan in Query Analyzer (select SQL statement or statements and press Ctrl-L) to get some idea of what part of a query is the most time-consuming. You might be able to rewrite the query to improve performance, or modify index structures.
5. Keep transactions as short as possible; this helps prevent deadlocks.
6. Try to avoid using SQL Server cursors whenever possible. Cursors can result in some performance degradation compared with set-based SELECT statements; try to use a correlated subquery or derived tables if you need to perform row-by-row operations.
If a cursor cannot be avoided, try to reduce the number of records processed in the cursor and the number of columns it processes.
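A minimal T-SQL sketch of steps 1-3, assuming a hypothetical table dbo.Orders and a hypothetical procedure dbo.usp_GetOrders:
-- Step 1: rebuild all indexes on the table (SQL Server 2005 syntax; use DBCC DBREINDEX on 2000)
ALTER INDEX ALL ON dbo.Orders REBUILD;
-- Step 2: update statistics with a full scan on one table...
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
-- ...or sampled statistics for every table in the database
EXEC sp_updatestats;
-- Step 3: force the procedure to get a fresh plan on its next execution
EXEC sp_recompile 'dbo.usp_GetOrders';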

June 02, 2008

SQL SERVER 2005 Encryption

In general, encryption is a mechanism for protecting data by applying a specially designed algorithm to it, effectively obfuscating its content by making it different from the original. Use of the algorithm involves a component known as the encryption key (a sequence of characters). The process can be reversed by applying an appropriate decryption key and the equivalent algorithm, yielding the original data.
Microsoft SQL Server 2005 makes use of a key hierarchy, which helps to protect keys that are to be used for encryption. This hierarchy can best be viewed as a series of layers, in which each layer encrypts the layer below it.



Service Master Key
Each SQL Server 2005 installation has exactly one Service Master Key (SMK), which is generated at install time. The SMK directly or indirectly secures all other keys on the server, making it the "mother of all SQL Server encryption keys." The Windows Data Protection API (DPAPI), one level up at the operating system level, uses the SQL Server service account credentials to automatically encrypt and secure the SMK.
The SMK can be backed up via the BACKUP SERVICE MASTER KEY T-SQL statement. This statement has the following format:
BACKUP SERVICE MASTER KEY TO FILE = 'path_to_file' ENCRYPTION BY PASSWORD = 'password'
To restore it, use the RESTORE SERVICE MASTER KEY statement:
RESTORE SERVICE MASTER KEY FROM FILE = 'path_to_file' DECRYPTION BY PASSWORD = 'password' [FORCE]
Because it is automatically generated by SQL Server, there are no CREATE or DROP statements for the Service Master Key.
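For example, a backup call might look like this (the file path and password here are hypothetical):
-- back up the SMK to a file; keep the password somewhere safe
BACKUP SERVICE MASTER KEY TO FILE = 'c:\backup\service_master_key.bak'
ENCRYPTION BY PASSWORD = 'P@ssw0rd1';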
Database Master Keys
While each SQL Server has a single Service Master Key, each SQL database can have its own Database Master Key (DMK). The DMK is created using the CREATE MASTER KEY statement:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password'
The DMK is encrypted using the Service Master Key and stored in the master database, a feature known as "automatic key management."
Like the Service Master Key, you can back up and restore Database Master Keys:
BACKUP MASTER KEY TO FILE = 'path_to_file'
ENCRYPTION BY PASSWORD = 'password'
Restoring the Database Master Key requires that you use the DECRYPTION BY PASSWORD clause, which specifies the password previously used to encrypt the backup file. In addition you must use the ENCRYPTION BY PASSWORD clause, which gives SQL Server a password to encrypt the DMK after it is loaded in the database.
RESTORE MASTER KEY FROM FILE = 'path_to_file' DECRYPTION BY PASSWORD = 'password' ENCRYPTION BY PASSWORD = 'password' [ FORCE ]
To drop a DMK, use the DROP MASTER KEY statement:
DROP MASTER KEY
This statement drops the Database Master Key from the current database.

Certificates

There are a lot of options associated with the CREATE CERTIFICATE statement.
CREATE CERTIFICATE TestCertificate
ENCRYPTION BY PASSWORD = 'abcd123@'
WITH SUBJECT = 'This is a test certificate',
START_DATE = '1/1/2006',
EXPIRY_DATE = '12/31/2008';

Encryption and Decryption by Certificate

Certificates can be used to encrypt and decrypt data directly by using the built-in EncryptByCert, DecryptByCert and Cert_ID functions.
The DecryptByCert function is used to decrypt data that was previously encrypted by certificate.

The EncryptByCert function takes two parameters: the certificate identifier (which can be derived from the certificate name using the Cert_ID T-SQL function) and the string of characters (of nvarchar, char, wchar, varchar or nchar datatype) to be encrypted, as in the following sample statement:
EncryptByCert(Cert_ID('TestCertificate'), 'Test')

The DecryptByCert T-SQL function, which returns the original data, also requires a certificate identifier and the encrypted string of characters (if the certificate definition included a password, you would need to provide it here as well). Note that you can restrict the ability to decrypt encrypted data either by making the password known to a limited number of users or by applying an AUTHORIZATION clause (which specifies the certificate's owner) when creating it; a sketch of the AUTHORIZATION clause follows the example below.
DecryptByCert(Cert_ID('TestCertificate'), @decrypt_data)
The whole process can be illustrated as:
DECLARE @encrypted VARBINARY(500)
SELECT @encrypted = EncryptByCert(Cert_ID('TestCertificate'), N'Test')
SELECT @encrypted
SELECT CAST(DecryptByCert(Cert_ID('TestCertificate'), @encrypted, N'abcd123@') AS NVARCHAR(100))
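As noted above, a certificate's owner can be set at creation time with the AUTHORIZATION clause. A minimal sketch, assuming a hypothetical database user hr_user:
-- hr_user becomes the certificate owner and controls who may use its private key
CREATE CERTIFICATE HRCertificate
AUTHORIZATION hr_user
WITH SUBJECT = 'Certificate owned by hr_user',
EXPIRY_DATE = '12/31/2008';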

Symmetric Keys
You can use certificates to create symmetric keys for encryption and decryption within the database.
CREATE SYMMETRIC KEY CCKey
WITH ALGORITHM = TRIPLE_DES
ENCRYPTION BY CERTIFICATE TestCertificate

SQL Server provides a set of functions to encrypt and decrypt data by symmetric key. These functions are EncryptByKey, DecryptByKey and Key_GUID.

The EncryptByKey function requires a reference to the symmetric key GUID in order to encrypt data.
The DecryptByKey function performs the reverse of EncryptByKey. This function decrypts your previously encrypted data.
To see that we have indeed created the intended key and to list all keys within the current database, we can use the following command, which selects all rows from the system catalog view named symmetric_keys:
SELECT * FROM sys.symmetric_keys;

Storing / Retrieving Encrypted Data
CREATE TABLE SymmetricTable(
Id INT IDENTITY(1,1) PRIMARY KEY,
PlainText NVARCHAR(100),
CipherText VARBINARY(MAX)
);
CREATE SYMMETRIC KEY CCKey
WITH ALGORITHM = TRIPLE_DES
ENCRYPTION BY CERTIFICATE TestCertificate;
-- must open the key if it is not already open
OPEN SYMMETRIC KEY CCKey
DECRYPTION BY CERTIFICATE TestCertificate WITH PASSWORD = 'abcd123@';
-- declare and set variable @str to hold the plaintext
DECLARE @str NVARCHAR(100);
SET @str = N'Hello DESX';
-- encrypt @str and store it in SymmetricTable
INSERT INTO SymmetricTable (PlainText, CipherText)
VALUES ( @str, EncryptByKey(Key_GUID('CCKey'), @str));
-- select data from SymmetricTable
SELECT * FROM SymmetricTable;
-- decrypt CipherText column and display it
SELECT CONVERT(NVARCHAR(100),
DecryptByKey(CipherText)) AS PlainText
FROM SymmetricTable;
-- close the key
CLOSE SYMMETRIC KEY CCKey;
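Because CCKey is protected by TestCertificate, SQL Server 2005 can also decrypt in a single call with DecryptByKeyAutoCert, which opens the key implicitly. A minimal sketch against SymmetricTable; the second argument is the certificate's private-key password from the CREATE CERTIFICATE example:
-- decrypt without an explicit OPEN SYMMETRIC KEY
SELECT CONVERT(NVARCHAR(100),
DecryptByKeyAutoCert(Cert_ID('TestCertificate'), N'abcd123@', CipherText)) AS PlainText
FROM SymmetricTable;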
