
Managing Transactions


When we update multiple tables, how do we make sure the database is always left in a consistent state? What happens if part of the operation succeeds and the other part fails? This is where managing transactions comes into play: it is important to understand how to control transactions, the different types of locking, and how to limit our exposure to potential problems.

How does a transaction work inside SQL Server?


Transactions are required because most of the time, operations that modify data in SQL involve multiple tables. For example, we might need to delete a customer and all the orders associated with that customer. What happens if only part of that operation succeeds? What happens if I delete only a few of that customer's orders? How do we then go back in and fix that? As you can see, there are many possible data-integrity problems inside a database.

Transactions control the entirety of an operation and ensure that either every statement succeeds or everything is rolled back should something fail. Transactions must meet the ACID test.

ACID


A - Atomic (the entire operation must succeed or fail as a single unit)
C - Consistent (once the operation completes, the database must be left in a consistent, valid state; if only part of the operation succeeded, everything is rolled back to the state in which we started)
I - Isolated (no other operations can impact my operation)
D - Durable (once the operation completes, the changes are safe)

SQL Server runs by default in what is known as autocommit mode, where each individual statement is an automatic transaction, so individual SQL statements do meet the atomic test. If something fails in the middle of an INSERT, UPDATE, or DELETE statement, the entire statement fails, not just the rows processed so far.
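The customer-and-orders scenario above can be sketched as an explicit transaction. This is a minimal sketch; the table and column names (Customers, Orders, CustomerID) are hypothetical, not from a real schema:

```sql
-- Delete a customer and all of their orders as one atomic unit.
BEGIN TRY
    BEGIN TRANSACTION;

    DELETE FROM Orders    WHERE CustomerID = 42;
    DELETE FROM Customers WHERE CustomerID = 42;

    COMMIT TRANSACTION;   -- both deletes succeeded: make them durable
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- any failure: undo both deletes
    THROW;                -- re-raise the error to the caller
END CATCH;
```

If either DELETE fails, the CATCH block rolls back both, so the database never ends up with orders that belong to a deleted customer.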

ORA's Corner: ORA-00937: not a single-group group function




Recently, I came across this issue when trying to include both a group function and an individual column expression in the same SELECT list.

Cause: A SELECT list cannot include both a group function and an individual column expression, unless the individual column expression is included in a GROUP BY clause.

Example: 
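A minimal query that raises this error, assuming the standard HR sample schema's EMPLOYEES table:

```sql
-- Mixing an aggregate with a plain column and no GROUP BY:
SELECT department_id, COUNT(*)
FROM   employees;
-- ORA-00937: not a single-group group function

-- Adding the column to a GROUP BY clause fixes it:
SELECT department_id, COUNT(*)
FROM   employees
GROUP BY department_id;
```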
Workaround:

You can drop either the group function or the individual column expression from the SELECT list, or else add a GROUP BY clause that includes all the individual column expressions listed.


Walkthrough ORACLE 11g: Configuring for Recoverability

In the previous article, “Shortcuts to losing your data”, I explained why organizations expect zero data loss and why backup solutions matter. Backups are not only for data protection; they are also used for data preservation and historical retention. However, the main purpose of backup and recovery is to restore a failed database.


There are two archive log modes:

  • NOARCHIVELOG mode
  • ARCHIVELOG mode


NOARCHIVELOG is the default mode, and the database runs normally in it; however, you have to shut down the database in order to take a backup, and you can recover only to the point of the last backup. Therefore, before taking backups, configure your database in ARCHIVELOG mode, because it allows you to back up the database while it is open. Backups taken in ARCHIVELOG mode can also be used to recover the database to a past point in time.


Walkthrough Practice – Configuring ARCHIVELOG mode

 

1. Set up the environment


SET ORACLE_SID=orcl

2. Determine the current archive mode

SQL> archive log list 
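With the current mode confirmed, switching to ARCHIVELOG mode follows the standard sequence. This is a sketch assuming a single-instance database (the orcl SID set above) and a connection as SYSDBA:

```sql
-- The database must be cleanly shut down and then mounted
-- (not open) before the archive mode can be changed.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Verify the change:
ARCHIVE LOG LIST
```

After this, "Database log mode" in the ARCHIVE LOG LIST output should read "Archive Mode".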


Blending Notes: Oracle 11g & SQL Server 2012 - The structure of a table


There are multiple ways to retrieve table information. A few days ago I was desperately looking for a SQL Server equivalent of Oracle's 'DESC' or 'DESCRIBE' command, because I badly wanted some information about an existing database. This post mainly focuses on describing a MS SQL Server table using sp_columns, and on Oracle's 'DESCRIBE' command, which retrieves column information for the specified table.
For instance, an Employee table includes information such as the table name, the table owner, details about its columns, and its physical storage size; this is referred to as metadata.
This kind of table information can be retrieved using Oracle's DESCRIBE command.





Syntax- Oracle:
  •      DESC[RIBE] <SCHEMA>.tablename


The DESCRIBE keyword can be shortened to DESC, and the schema can be omitted.


Describing EMPLOYEE table using DESCRIBE command in Oracle 
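As a sketch, describing a hypothetical EMPLOYEE table in SQL*Plus looks like this (the column names and types shown are illustrative, not from a real schema):

```sql
SQL> DESC employee
 Name          Null?    Type
 ------------- -------- -------------
 EMP_ID        NOT NULL NUMBER(6)
 EMP_NAME               VARCHAR2(50)
 HIRE_DATE              DATE
```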

 Syntax- SQL SERVER:
  •      EXEC sp_columns @table_name = 'EMPLOYEE';


The catalog stored procedure above returns column information for the EMPLOYEE table; it can also take the object owner of the table as a parameter. If you want to retrieve catalog information for only one column, you can specify it as follows:
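A sketch of the single-column call, using sp_columns' @column_name parameter (the EMPLOYEE table and LastName column are illustrative):

```sql
-- Return catalog information for just one column of the table.
EXEC sp_columns
    @table_name  = 'EMPLOYEE',
    @column_name = 'LastName';
```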


Walkthrough SQL Server 2012: AdventureWorks


Where did I go wrong with AdventureWorks?

The AdventureWorks sample database launched with SQL Server 2012. Earlier we used the Northwind and Pubs sample databases; however, AdventureWorks has become a vital part of learning the new features in SQL Server 2012, and it can be downloaded from the CodePlex site (http://msftdbprodsamples.codeplex.com/releases).

Every so often I run into something that seems so simple that it should just work, but doesn't. I tried to attach the AdventureWorks sample database (AdventureWorks2012 data file) using SQL Server Management Studio as follows:

  • Right-click on Databases
  • Click Attach
  • Click Add
  • Select the AdventureWorks .mdf file
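The same attach can also be done in T-SQL with CREATE DATABASE ... FOR ATTACH. This is a sketch, and the file path below is an assumption for illustration; since the download provides only the .mdf data file, the variant that rebuilds the missing log file is used:

```sql
-- Attach the data file; SQL Server rebuilds the missing log file.
-- The path is illustrative - substitute your own download location.
CREATE DATABASE AdventureWorks2012
    ON (FILENAME = 'C:\Data\AdventureWorks2012_Data.mdf')
    FOR ATTACH_REBUILD_LOG;
```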


SQL*Loader

SQL*Loader offers two load methods: the conventional path and the direct path.

Use of the buffer cache

Conventional path: uses the database buffer cache.
Direct path: avoids the database buffer cache.

Method of saving data

Conventional path: loads use SQL processing and a database COMMIT operation for saving data. The insertion of an array of records is followed by a COMMIT operation, and each data load may involve several transactions.
Direct path: uses data saves to write blocks of data to the Oracle data files; this is faster than the conventional path.

The following features differentiate a data save from a COMMIT:
  • During a data save, only full database blocks are written to the database.
  • The blocks are written after the high-water mark (HWM) of the table.
  • After a data save, the HWM is moved.
  • Internal resources are not released after a data save.
  • A data save does not end the transaction.
  • Indexes are not updated at each data save.

Logging changes

Conventional path: generates undo and redo data; loading generates redo entries similar to those of any DML statement (redo entries are always generated).
Direct path: generates no undo data, and redo logging can be switched off; redo is generated only under specific conditions.

Enforcing constraints

Conventional path: uses INSERT statements, so all enabled constraints are enforced in the same way as during any DML operation.
Direct path: the table is locked for DML, and only the following constraints can be enforced:
  • UNIQUE
  • PRIMARY KEY
  • NOT NULL

Firing the INSERT triggers

Conventional path: fires INSERT triggers; they fire during conventional loads just as for ordinary DML.
Direct path: does not fire INSERT triggers; they are disabled before a direct path load and re-enabled at the end of the load.

Loading into clustered tables

Conventional path: clustered tables can be loaded with conventional path loads only.
Direct path: direct loads cannot be used to load rows into clustered tables.

Locking

Conventional path: allows other users to modify the tables during the load operation.
Direct path: prevents other users from making changes to the tables during the load operation. While a direct path load is in progress, other transactions cannot make changes to the tables being loaded; the only exception to this rule is when several parallel direct load sessions are used concurrently.
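As a sketch of how the two paths are selected in practice, SQL*Loader is driven by a control file plus the DIRECT command-line option. The file names, table, columns, and credentials below are assumptions for illustration only:

```sql
-- emp.ctl: a minimal SQL*Loader control file (all names illustrative)
LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE employees
FIELDS TERMINATED BY ','
(emp_id, emp_name, salary)
```

It would then be invoked as `sqlldr scott/tiger control=emp.ctl` for a conventional path load, or with `direct=true` added for a direct path load.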

Shortcuts to losing your data


Please stand by…

We are experiencing technical difficulties!


Loss of data can break a business. It affects business objectives and can influence the organisation, its information systems, and its security program. Threats to databases are significant, and they can be physical or non-physical. Data can be lost because of hardware failures, accidental deletion by users, data corruption, or environmental disruption.


According to research by the University of Texas, “Only six percent of companies suffering from a catastrophic data loss survive, while 43 percent never reopen and 51 percent close within two years.” It further states that some organisations never recover from major losses of data. Jim Martin, chief operating officer of Worldport Ireland, states that the majority of data in organisations is not stored in a central repository off site. The World Trade Center, destroyed in the 2001 attacks, was a convenient location for business where many financial and investment organisations were based. Hardware failures and breakdowns happen almost as a matter of course; when they do, users can no longer access the data in their databases, with a major impact on large amounts of corporate data. These kinds of situations emphasise the importance of a proper disaster recovery plan, and maintaining a remote-site replica of the data is an efficient means of protection.


1.     User Error

Human error causes most data loss: erroneous updates, deletes from the database, or overwrites. Deleting the contents of a table or dropping database objects can happen by mistake.

2.     Data Corruption

Data can become corrupt because of software bugs or virus attacks.

3.     Media Failure

A failure within the electronic circuits or hardware components may cause data loss.

4.     Disaster

Data can be lost because of a natural event or human behaviour such as a terrorist attack. It can also be an electrical failure or the complete loss of power to a geographic area, caused by natural disasters (earthquakes, tsunamis) or other causes of power outage.


Oracle and SQL Server both provide strong recoverability features, each superior to the other in some aspects. Oracle supports incremental backups, a strong option when managing large-scale databases; in SQL Server this has to be accomplished by applying transaction logs. A scheduled Oracle database with full, differential, and incremental backups would provide a high degree of recoverability, along with the support of Data Guard. Database mirroring is SQL Server's equivalent of Oracle's Data Guard, but important features Oracle offers are still unavailable in SQL Server database mirroring.

Comparing database snapshots with Oracle's Flashback, Oracle stands way up high: Oracle Flashback is as easy as issuing a query, whereas in SQL Server a snapshot has to be created first and then queried for the changes.
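The "as easy as issuing a query" point can be sketched with a Flashback Query; the table name and interval below are illustrative:

```sql
-- See the rows of a table as they were ten minutes ago.
SELECT *
FROM   employees
       AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);

-- Flashback Table can then restore the table to that point
-- (requires row movement to be enabled on the table).
ALTER TABLE employees ENABLE ROW MOVEMENT;
FLASHBACK TABLE employees
    TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);
```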
Data corruption and human error cause most data loss; in these scenarios it is important to back up the database, since otherwise there is no option other than to recreate the data. Data can become corrupt because of virus attacks or bugs, and the corruption can directly affect the entire database, an associated table, or just a few files.


In the event of a disaster, property and computer equipment can be damaged and valuable data can be lost. Recovery from this type of data loss requires hardware redundancy and mirroring. There should be a proper, manageable disaster recovery plan for the database. Because of the large quantity of data involved, organisations should ensure that their backup strategy includes storing the backup medium at an offsite location.

Considering the options offered by Oracle and SQL Server, it is evident that Oracle is in the lead and rich in options: it offers smoother restoration and recovery, zero data loss, and easy recovery from user errors.