Monday, February 17, 2014

Oracle table locks

Query to check which sessions hold locks on Oracle tables:


SELECT l.session_id||','||v.serial# sid_serial,
       l.oracle_username ora_user,
       o.object_name,
       o.object_type,
       DECODE(l.locked_mode,
          0, 'None',
          1, 'Null',
          2, 'Row-S (SS)',
          3, 'Row-X (SX)',
          4, 'Share',
          5, 'S/Row-X (SSX)',
          6, 'Exclusive',
          TO_CHAR(l.locked_mode)
       ) lock_mode,
       o.status,
       TO_CHAR(o.last_ddl_time, 'dd.mm.yy') last_ddl
FROM   dba_objects o, gv$locked_object l, v$session v
WHERE  o.object_id = l.object_id
AND    l.session_id = v.sid
ORDER BY 2, 3;

Saturday, September 7, 2013

Relational Database Management System:

             Dr. E. F. Codd, an IBM scientist, published an article in Computerworld magazine in 1985 listing twelve rules that a database must satisfy if it is to be considered truly relational. C. J. Date is credited with carrying Codd's work forward after his death. Before the arrival of these rules, vendors had secured the market with hierarchical, network, and hybrid databases. Because of normalization, vendors and developers showed more interest in implementing RDBMSs. Based on these twelve rules, Oracle Corporation first implemented a relational database management server, written in the C language; later Sybase, IBM, Microsoft, and other vendors developed their own relational database management systems. Let's move on to the rules, with summarized information.

 Rule 0:


          Rule 0 requires the system to be relational and to satisfy all of the rules given below.


Rule 1:Information rule.

The information rule simply requires all information in the database to be represented in one and only one way: namely, by values in column positions within rows of tables.

Rule 2:Guaranteed access rule


          All data should be accessible without ambiguity. This means each data item can be uniquely identified using the combination of table name, primary key value, and column name.

Rule 3:Systematic treatment of null values


         Null values (distinct from empty character string or a string of blank characters and distinct from zero or any other number) are supported in the fully relational DBMS for representing missing information in a systematic way, independent of data type.

Rule 4:Dynamic online catalog based on the relational model


           The DBMS must provide access to its structure through the same tools that are used to access the data. This is accomplished by storing the structure definition within tables, the same as data is stored.

Rule 5:Comprehensive data sub-language rule


A relational system may support several languages and various modes of terminal use. However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings and that is comprehensive in supporting all of the following:
  • data definition
  • view definition
  • data manipulation (interactive and by program)
  • integrity constraints
  • authorization
  • transaction boundaries (begin, commit, and rollback).
Relational databases use forms of standard SQL as their supported comprehensive language.

Rule 6:View updating rule


Data can be presented to the user in different logical combinations called views. Each view should support the same full range of data manipulation that is available for direct access to a table. (This is only partially fulfilled in current implementations, since update and delete access through logical views conflicts with the common use of logical views to provide read-only access for query and reporting. There is no universal agreement that it is a good idea in any case.)

Rule 7:High-level insert, update, and delete


 Data can be retrieved from a relational database in sets constructed of data from multiple rows and/or multiple tables. This rule states that insert, update, and delete operations should be supported for any retrievable set rather than just for a single row in a single table.

Rule 8:Physical data independence


Applications must still work using the same syntax, even when changes are made to the way in which the database internally implements data storage and access methods. This rule implies that the way the data is stored physically must be independent of the logical manner in which it’s accessed. This is saying that users shouldn't be concerned about how the data is stored or how it’s accessed. In fact, users of the data need only be able to get the basic definition of the data they need. Other things that shouldn't affect the user’s view of the data are as follows:
Adding indexes: Indexes determine how the data is stored, yet the user, through SQL, will never know that indexes are being used.
Changing the file-group of an object: Just moving a table to a new file-group will not affect the application. You access the data in the same way no matter where it is physically located.
Using partitioning: Beyond moving entire tables around to different file-groups, you can move parts of a table around by using partitioning technologies to spread access around to different independent subsystems to enhance performance.
Modifying the storage engine: From time to time, Microsoft has to modify how SQL Server operates (especially in major version upgrades). However, SQL statements must appear to access the data in the same manner as they did in any previous version, only (we hope) faster.
Microsoft has put a lot of work into this area, because SQL Server has a separate relational engine and storage engine, and OLE DB is used to pass data between the two. Further reading on this topic is available in SQL Server 2008 Books Online in the “Database Engine Components” topic, or in Inside Microsoft SQL Server 2005: The Storage Engine by Kalen Delaney (Microsoft Press, 2006).

Rule 9:Logical data independence



              This rule insulates the user or application program from the low-level implementation of the database. Together with Rule 8, it specifies that the specific access or storage techniques used by the RDBMS, and even changes to the structure of the tables in the database, shouldn't affect the user's ability to work with the data. Thus, if you add a column to a table, or if tables are split in a manner that doesn't add or subtract columns, the application programs that call the database should be unimpaired.
For example, say you have the table in Figure A-1.



The user should be unaffected. If you were to implement INSTEAD OF triggers on the view that had the same number of columns with the same names, you could seamlessly meet the need to manage the view in the exact manner the table was managed. Note that the handling of identity columns can be tricky in views, because they require data to be entered, even when the data won’t be used. See Chapter 6 for more details on creating INSTEAD OF triggers.
Of course, you cannot always make this rule work if columns or tables are removed from the system, but you can make the rule work if columns and data are simply added.

Rule 10:Integrity independence


Integrity constraints specific to a particular relational database must be definable in the relational data sub-language and storable in the catalog, not in the application programs. The database must support a minimum of the following two integrity constraints:
• Entity integrity: No component of a primary key is allowed to have a NULL value.
• Referential integrity: For each distinct non-NULL foreign key value in a relational database, there must exist a matching primary key value from the same domain.

This rule says that the database language should support integrity constraints that restrict the data that can be entered into the database and the database modifications that can be made. In other words, the RDBMS must internally support the definition and enforcement of entity integrity (primary keys) and referential integrity (foreign keys).
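As a hedged sketch of these two constraints in standard SQL (the table and column names are illustrative, not taken from the original text):

```sql
-- Entity integrity: no component of the primary key may be NULL.
CREATE TABLE department (
    dept_id   INT         NOT NULL,
    dept_name VARCHAR(50) NOT NULL,
    CONSTRAINT pk_department PRIMARY KEY (dept_id)
);

-- Referential integrity: every non-NULL dept_id in employee
-- must match an existing department.dept_id.
CREATE TABLE employee (
    emp_id  INT NOT NULL PRIMARY KEY,
    dept_id INT NULL,
    CONSTRAINT fk_employee_department
        FOREIGN KEY (dept_id) REFERENCES department (dept_id)
);
```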

Rule 11:Distribution independence


           This rule says that the database language must be able to manipulate data located on other computer systems. In essence, we should be able to split the data on the RDBMS out onto multiple physical systems without the user realizing it. SQL Server 2008 supports distributed transactions among SQL Server sources, as well as other types of sources, using the Microsoft Distributed Transaction Coordinator service.

Rule 12:Non subversion rule


          This rule requires that alternate methods of accessing the data must not be able to bypass integrity constraints, which means that users can't violate the rules of the database in any way.

For most SQL Server 2008 applications, this rule is followed, because there are no methods of getting to the raw data and changing values other than the methods prescribed by the database. However, SQL Server 2008 violates this rule in two places:
• Bulk copy: By default, you can use the bulk copy routines to insert data into a table directly, bypassing the database server's validations.
• Disabling constraints and triggers: There's syntax to disable constraints and triggers, thereby subverting this rule.

It's always good practice to use these two features carefully. They leave gaping holes in the integrity of your data, because they allow any values to be inserted into any column. Because you're expecting the data to be protected by the constraints you've applied, data value errors might occur in the programs that use the data without re-validating it first.
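A minimal T-SQL sketch of the second loophole (the table and constraint names are hypothetical placeholders):

```sql
-- Turn off checking for a foreign key, insert a row that violates it,
-- then turn checking back on.
ALTER TABLE employee NOCHECK CONSTRAINT fk_employee_department;

INSERT INTO employee (emp_id, dept_id)
VALUES (100, 9999);  -- 9999 need not exist in department

-- Re-enabled, but the constraint is now marked untrusted and the
-- bad row remains in the table.
ALTER TABLE employee CHECK CONSTRAINT fk_employee_department;
```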

Tuesday, October 16, 2012

Sybase to SQL Server Migration Issues

Concatenating NULLABLE char column with NOT NULLABLE char column: 

                While concatenating a nullable char column with a NOT NULL char column, Sybase right-trims the first (nullable) column before concatenating, but SQL Server concatenates directly. So SQL Server produces output that includes the trailing spaces of the first column.

Assume a table with two columns, a nullable char(10) and a not-null char(10), containing one record.

When the two columns are concatenated, Sybase gives the output without the padding space:
SELECT column1+column2,* FROM dbo.test

When the two columns are concatenated, SQL Server gives the output with the padding space:
SELECT column1+column2,* FROM dbo.test

In short, Sybase right-trims the nullable character column during concatenation, but SQL Server does not.

Solution:
To resolve the above scenario and get the same result as Sybase, it's mandatory to add rtrim() for the first column, like below.
SELECT rtrim(column1)+column2,* FROM dbo.test

Don't use SET ANSI_PADDING OFF to resolve this issue; it is deprecated and not recommended for future versions.
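A hedged repro sketch of the difference (the table dbo.test and its contents are assumptions based on the description above):

```sql
CREATE TABLE dbo.test (
    column1 CHAR(10) NULL,      -- nullable: padded to 10 chars on SQL Server
    column2 CHAR(10) NOT NULL
);

INSERT INTO dbo.test (column1, column2) VALUES ('abc', 'xyz');

-- SQL Server keeps column1's trailing blanks in the result.
SELECT column1 + column2 AS concatenated FROM dbo.test;

-- Trimming the nullable column first matches the Sybase behavior.
SELECT RTRIM(column1) + column2 AS concatenated FROM dbo.test;
```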

Please click here to know about sybase to sql server migration issues 

Friday, October 5, 2012

SSMA error message S2SS0046


CONVERT or CAST expression from binary to integer might cause a problem because of platform dependence



                  Comparing a binary value with a timestamp column behaves differently between Sybase and SQL Server. The code below worked fine in Sybase but not in SQL Server (where @row_upd was of varchar(38) datatype):

WHERE timestamp = substring(CONVERT(varbinary(16), CONVERT(INT, @row_upd)), 5, 8)

To handle this, the same code was rewritten using dummy-value concatenation:

0x00000000 + substring(0x00000000 + CONVERT(varbinary(8), CONVERT(INT, @row_upd)), 5, 8)
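The SSMA warning exists because the byte layout of an integer inside a varbinary value is platform-dependent. A hedged sketch of the padding trick on SQL Server (the variable value is illustrative only):

```sql
DECLARE @row_upd varchar(38) = '12345';

-- CONVERT(varbinary(8), CONVERT(int, @row_upd)) yields 4 bytes on
-- SQL Server, so prefixing 0x00000000 guarantees at least 8 bytes
-- before substring(..., 5, 8) extracts the trailing portion.
SELECT 0x00000000 +
       substring(0x00000000 + CONVERT(varbinary(8), CONVERT(int, @row_upd)), 5, 8);
```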
 

Sybase to SQL Server Migration Issues

Group by statement:

In Sybase, a GROUP BY statement doesn't require the select-list columns to also appear in the GROUP BY clause. But in SQL Server, it is mandatory that the non-aggregated select-list columns match the GROUP BY columns, or be a subset of them.
Replace such statements by wrapping a MIN() aggregate function around the columns that are not part of the GROUP BY list.
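A hedged before/after sketch of the rewrite (the table and columns are hypothetical):

```sql
-- Accepted by Sybase ASE, rejected by SQL Server:
--   SELECT dept_id, emp_name, COUNT(*) FROM employee GROUP BY dept_id;

-- SQL Server rewrite: wrap the extra column in MIN().
SELECT dept_id, MIN(emp_name) AS emp_name, COUNT(*) AS cnt
FROM employee
GROUP BY dept_id;
```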

Sybase to SQL Server Migration Issues


Avoid using Emulated function:


         When migrating objects using SSMA, emulated functions are created in the SYSDB database by SSMA, and those functions are referenced in the migrated database objects in place of SQL Server system functions. These emulated functions may affect query performance.
To avoid the performance impact, make the following changes: replace each emulated function call with an equivalent native statement, either by using a CASE expression (for example, handling @start < 0 for substring) or by making the necessary platform-specific parameter alteration (for example, the proper SQL Server-specific datepart).


Sybase to SQL Server Migration Issues

Remote Procedure call:

A remote procedure call to another Sybase ASE database located on a remote server is made through a four-part name. An example of the remote procedure call is given below.
Exec remoteserver.dbname.schemaname.procname
In SQL Server, the same statement cannot be executed. To replicate the scenario in SQL Server, add the remote Sybase ASE server as a linked server and modify the four-part reference to:
Exec (dbname.schemaname.procname) at remoteserver
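A hedged sketch of registering the linked server and making the pass-through call (the server, provider, database, and procedure names are placeholders; the exact OLE DB provider depends on what is installed):

```sql
-- Register the remote Sybase ASE server as a linked server
-- (an ASE OLE DB provider is assumed to be installed).
EXEC sp_addlinkedserver
     @server     = 'remoteserver',
     @srvproduct = 'Sybase ASE',
     @provider   = 'ASEOLEDB',
     @datasrc    = 'asehost:5000';

-- Pass-through execution of the remote procedure.
EXEC ('dbname.schemaname.procname') AT remoteserver;
```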



Sybase to SQL Server Migration Issues


Replace Tempdb tables with ##tables

                          The behavior of tempdb tables is different between Sybase and SQL Server. In Sybase, such a table exists until the end of the owner's session or an explicit drop, and it is created with a name like tempdb.schema.temptable. To achieve the same functionality in SQL Server, a global temp table should be used, so that the table is visible across all sessions and exists until the end of the owner's session or an explicit drop. In SQL Server, a global temp table is created with a name like ##temptable. Preferably, drop temp tables explicitly. I have placed the SQL Server equivalents of the Sybase scripts for creating and dropping these tables.
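A hedged sketch of the equivalent scripts (the table name and columns are placeholders):

```sql
-- Sybase style, for reference:
--   create table tempdb..temptable (id int, name varchar(30))
--   drop table tempdb..temptable

-- SQL Server global temp table, visible across all sessions:
CREATE TABLE ##temptable (id int, name varchar(30));

-- Preferred explicit drop, guarded so it is safe to re-run:
IF OBJECT_ID('tempdb..##temptable') IS NOT NULL
    DROP TABLE ##temptable;
```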





Sybase to SQL Server Migration Issues

Binary to Numeric Conversion fails in SQL Server

                    Converting a binary value to numeric is possible in Sybase, but not in SQL Server. Instead, we can convert the binary value into an integer. This is not an exact replacement for the Sybase scripts, but it lets us handle some scenarios with this change.
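A hedged sketch of the workaround (the literal value is illustrative):

```sql
-- Sybase allows convert(numeric, <binary value>); SQL Server does not.
-- SQL Server workaround: convert the binary value to int instead.
SELECT CONVERT(int, 0x00003039) AS int_value;  -- 12345
```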


Sybase to SQL Server Migration Issues


Issue for Inserting Timestamp

If we try to insert actual timestamp data from a Sybase table into SQL Server, we get the error 'Cannot insert an explicit value into a timestamp column. Use INSERT with a column list to exclude the timestamp column, or insert a DEFAULT into the timestamp column. Severity 16'. We intend to create in SQL Server an exact copy of the Sybase database with all its data, but this error stops the data insertion. I have attached a screenshot of the direct insertion along with the error.









To avoid the above error during data migration, map timestamp to the binary(8) type for table columns. During SSMA conversion, add a new column named ssma_timestamp that has the Transact-SQL timestamp type. Add the default @@DBTS for the old timestamp column to ensure that this column receives proper timestamp values after the migration finishes. The ssma_timestamp column is necessary to advance the @@DBTS value after a row is updated. We should change timestamp to ssma_timestamp in the required places in procedures, functions, triggers, and other objects; usually these changes are in the SELECT statements of the scripts. Please find the screenshot below with the required changes.
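A hedged sketch of the converted table definition, following the SSMA convention described above (the table and column names are placeholders):

```sql
CREATE TABLE dbo.orders (
    order_id       int       NOT NULL PRIMARY KEY,
    -- old Sybase timestamp column, remapped to binary(8) so that
    -- explicit values can be inserted during migration:
    old_timestamp  binary(8) NOT NULL DEFAULT @@DBTS,
    -- new column added by SSMA; it advances @@DBTS automatically
    -- on every insert/update:
    ssma_timestamp timestamp NOT NULL
);

-- Explicit inserts into the binary(8) column now succeed:
INSERT INTO dbo.orders (order_id, old_timestamp)
VALUES (1, 0x00000000000007D1);
```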


Wednesday, October 3, 2012

Sybase to SQL Server Migration Issues

Nullability difference in Table creation:

           When creating a table in Sybase, the default nullability of a column is NOT NULL. But in SQL Server, the default nullability of a column is NULL. That means that if the developer does not specify the nullability of a column while creating a table, Sybase defines the column as NOT NULL and SQL Server defines it as nullable.
You can see an example of the default nullability in the screenshots below for each database.

Sybase:






SQL Server:
 




            While migrating a 'create table' statement, we should add NOT NULL for the columns that don't specify nullability in the column definition.
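A hedged sketch of the fix (the table name and columns are illustrative):

```sql
-- Sybase source (no nullability given, so NOT NULL in Sybase):
--   create table dbo.customer (cust_id int, cust_name varchar(30))

-- SQL Server target: make the Sybase default explicit; otherwise
-- these columns would silently become nullable.
CREATE TABLE dbo.customer (
    cust_id   int         NOT NULL,
    cust_name varchar(30) NOT NULL
);
```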




Friday, September 28, 2012

Sybase to SQL Server Migration Issues

Variable Scope/behavior in Update statment:

                   In Sybase, when a variable is used for an intermediate calculation in an UPDATE statement, the variable's value is based on the last executed statement or the last processed record. In Sybase, the variable assignment happens after the column assignment; that is, within a single UPDATE statement, the variable assignment has lower priority than the column assignment, regardless of whether it is placed on the left or the right. So if we want the updated variable's output to become the input of the column assignment for each and every row, we cannot achieve that in Sybase.

But in SQL Server, we can perform that operation. The variables are evaluated per row and the variable assignment takes priority, so we can get a consistent, row-by-row result.

The screenshots below illustrate the scenario I am trying to explain.
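A hedged sketch of the SQL Server behavior in a running-total style (the table is hypothetical, and this is shown only to illustrate the difference; without explicit ordering guarantees the row-processing order of such an update is not guaranteed, so it is not a recommended production technique):

```sql
CREATE TABLE dbo.t (id int, val int, running int);
INSERT INTO dbo.t (id, val) VALUES (1, 10), (2, 20), (3, 30);

DECLARE @sum int = 0;

-- In SQL Server, the variable's new value feeds the column
-- assignment in the same row, row by row.
UPDATE dbo.t
SET @sum = running = @sum + val;

SELECT id, val, running FROM dbo.t;
```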

Variable behavior in Sybase Update statement




Variable behavior in SQL Server Update statement

             I have just shared the difference between the two technologies for an UPDATE statement that uses an intermediate variable. Hopefully we won't encounter such a scenario in the source.


Sybase to SQL Server Migration Issues

Data truncation while Insertion

                 The behavior of Sybase and SQL Server differs in the INSERT statement. When we try to insert more characters than the actual column length allows, Sybase automatically truncates the excess characters and inserts the row, but SQL Server raises the error given below.

Msg 8152, Level 16, State 14, Line 2
String or binary data would be truncated.
The statement has been terminated.

                 If we try to insert more than 10 characters into a column of length 10, Sybase removes the characters from the 11th position onward and inserts the first 10 characters into the table, but SQL Server raises the error. You can understand this easily from the screenshots given below.
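A hedged repro sketch, including one way to reproduce the Sybase result explicitly (table and data are illustrative):

```sql
CREATE TABLE dbo.trunc_test (name char(10));

-- Fails on SQL Server with Msg 8152:
--   INSERT INTO dbo.trunc_test VALUES ('abcdefghijKLM');

-- Explicit truncation mimics the Sybase behavior:
INSERT INTO dbo.trunc_test VALUES (LEFT('abcdefghijKLM', 10));

SELECT name FROM dbo.trunc_test;  -- 'abcdefghij'
```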

Sybase,Inserting too many characters than actual column length
  
SQL Server,Inserting too many characters than actual column length


Sybase to SQL Server Migration Issues

 Default Date/Datetime format in Sybase Vs SQL Server:

               The default date/datetime format differs between Sybase and SQL Server, so we need to change the target system's (SQL Server's) date/datetime format based on the source system's output. The default formats are given below.

Sybase default date format:
              MON DD YYYY HH:MM:SS:SSS AM

SQL Server default date format:
              YYYY-MM-DD HH:MM:SS
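A hedged sketch of forcing a Sybase-like display format on SQL Server with CONVERT style codes:

```sql
-- Style 109 gives mon dd yyyy hh:mi:ss:mmmAM, close to the Sybase
-- default display; style 120 is the ODBC canonical yyyy-mm-dd form.
SELECT CONVERT(varchar(30), GETDATE(), 109) AS sybase_like,
       CONVERT(varchar(30), GETDATE(), 120) AS odbc_canonical;
```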


Thursday, September 27, 2012

Cursor behavior in SQL Server vs Sybase


The objective: when one cursor's input is based on another cursor's output, the input-producing cursor should be executed first, and its script should come first in the procedure.
Cursor interdependency based on a parameter is supported by Sybase even when the inner cursor's DECLARE statement is placed after the outer cursor's DECLARE statement. But in SQL Server, such interdependency is not supported unless the inner cursor is declared inside the outer cursor's open/fetch block. So we get different result sets for the same logical code in Sybase and SQL Server. If we change the code to declare the inner cursor inside the outer cursor (between the OPEN and CLOSE block), we get the same result set. I have given a detailed explanation below along with screenshots.

Sybase Script with Cursor Inter-dependency

Here, the first cursor's output is the input for the second cursor, but the declaration of the second cursor is located before the first cursor's OPEN statement. Sybase still maintains the interdependency between the cursors and passes the first cursor's output to the second cursor's input, so the Sybase output reflects both cursors. You can see the Sybase output below.
One more point: cursor deallocation is handled by the Sybase engine automatically, but in SQL Server we should issue DEALLOCATE explicitly in the required places.

Sybase Output for the above given sample code


To achieve the same result set in SQL Server, we should place the second (inner) cursor's DECLARE and OPEN statements inside the first (outer) cursor's fetch loop. Otherwise, the SQL Server engine gives a NULL value for @var2 and returns only the result set below.
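A hedged sketch of the corrected SQL Server pattern (the tables, columns, and variable names are placeholders standing in for the screenshots):

```sql
DECLARE @var1 int, @var2 int;

DECLARE outer_cur CURSOR FOR
    SELECT dept_id FROM dbo.department;

OPEN outer_cur;
FETCH NEXT FROM outer_cur INTO @var1;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Declaring the inner cursor here lets it capture the current
    -- value of @var1 on every outer iteration.
    DECLARE inner_cur CURSOR FOR
        SELECT emp_id FROM dbo.employee WHERE dept_id = @var1;

    OPEN inner_cur;
    FETCH NEXT FROM inner_cur INTO @var2;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT CAST(@var1 AS varchar(12)) + ' -> ' + CAST(@var2 AS varchar(12));
        FETCH NEXT FROM inner_cur INTO @var2;
    END;

    CLOSE inner_cur;
    DEALLOCATE inner_cur;  -- explicit deallocation is required in SQL Server

    FETCH NEXT FROM outer_cur INTO @var1;
END;

CLOSE outer_cur;
DEALLOCATE outer_cur;
```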

SQL Server output with bug 

SQL Server Script with relevant changes 

Here we have given a DEALLOCATE statement for each cursor explicitly in the required places. The outer cursor's output now simply flows in as the inner cursor's input. With the code changed as above, we get the same result as Sybase. Once again, I have attached the actual result for your reference.

SQL Server Output with relevant changes 

