Configure SQL Server Database Mirroring Using SSMS

My test environment consists of two separate VMs running on VMware Workstation, each with Windows Server 2008 R2 Datacenter Edition and SQL Server 2008 R2 Enterprise, named appropriately Principal and Mirror. The SQL Server and SQL Server Agent service accounts run as domain users (DOMAIN\User). Windows Firewall is off for the sake of this example.

I created a database on the Principal SQL Server instance and named it TestMirror. The recovery model is set to FULL.
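
Mirroring requires the FULL recovery model; if you need to set it explicitly, a minimal sketch using this walkthrough's database name:

-- Ensure the database uses the FULL recovery model, which mirroring requires
ALTER DATABASE TestMirror SET RECOVERY FULL;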


1st step: Issue a full backup of the database.

BACKUP DATABASE TestMirror
TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\Backup.bak';

2nd step: Issue a transaction log backup of the database.

BACKUP LOG TestMirror
TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\Backup.trn';


Below are the two files in the file system:


3rd step: Assuming you have the backup folder shared on the Principal Server and you can access it from the Mirror Server, you will need to restore the full backup to the Mirror server with the NORECOVERY option.

RESTORE DATABASE TestMirror
FROM DISK = N'\\Principal\Backup\Backup.bak'
WITH FILE = 1,
     MOVE N'TestMirror_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestMirror_1.ldf',
     NORECOVERY, NOUNLOAD, STATS = 10;

4th step: Restore the log backup, also with the NORECOVERY option.

RESTORE LOG TestMirror
FROM DISK = N'\\Principal\Backup\Backup.trn'
WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10;

 


Now it’s time to dig in and configure Database Mirroring. From the Principal server, right-click the database and choose “Tasks” | “Mirror”, or choose “Properties” | “Mirroring”.


Click the “Configure Security” button and click “Next >” if the Configure Database Mirroring Security Wizard intro screen appears. The next screen should be the Include Witness Server screen:


This is where you would configure a witness server for your mirroring, but since we’re just configuring a basic mirror we will skip this part. However, if you are configuring mirroring in an Enterprise environment it is recommended that you configure a witness server, because without one you will not have the high-safety mode with automatic failover option.

Select “No”, then click “Next >” to continue the process.

The next screen will give you options to configure the Principal Server Instance:


Here we will be creating our endpoint, which is a SQL Server object that allows SQL Server to communicate over the network. We will name it Mirroring with a Listener Port of 5022.
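
The wizard creates this endpoint for us, but for reference the equivalent T-SQL would look roughly like this sketch (the name Mirroring and port 5022 match what we enter in the wizard):

-- Rough equivalent of the endpoint the wizard creates on the Principal
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);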

Click the “Next >” button to continue.

The next screen will give you options to configure the Mirror Server Instance:


To connect to the Mirror server instance we will need to click the “Connect…” button then select the mirror server and provide the correct credentials:


Once connected, we also notice our endpoint name is Mirroring and we are listening on port 5022.

Click “Next >” and you’ll see the Service Accounts screen.


When using Windows Authentication, if the server instances use different accounts, specify the service accounts for SQL Server. These service accounts must all be domain accounts (in the same or trusted domains).

If all the server instances use the same domain account or use certificate-based authentication, leave the fields blank.

Since my service accounts are using the same domain account, I’ll leave this blank.

Click “Finish” and you’ll see a Complete the Wizard screen that summarizes what we just configured. Click “Finish” one more time.


If you see the big green check mark that means Database Mirroring has been configured correctly. However, just because it is configured correctly doesn’t mean that database mirroring is going to start…

The next screen that pops up should be the Start/Do Not Start Mirroring screen:


We’re going to click Do Not Start Mirroring just so we can look at the Operating Modes we can use:


Since we didn’t specify a witness server we will not get the High Safety with automatic failover option, but we still get the High Performance and High Safety without automatic failover options.

For this example, we’ll stick with synchronous high safety without automatic failover so changes on both servers will be synchronized.
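
The operating mode can also be changed later with T-SQL; a hedged sketch using this walkthrough's database name:

-- High safety (synchronous): transactions commit on both partners
ALTER DATABASE TestMirror SET PARTNER SAFETY FULL;
-- High performance (asynchronous) would instead be:
-- ALTER DATABASE TestMirror SET PARTNER SAFETY OFF;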

Next, click “Start Mirroring” as shown below.


If everything turned out right, Database Mirroring has been started successfully and we are fully synchronized.
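
You can confirm the session from either server by querying the sys.database_mirroring catalog view, for example:

SELECT mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE database_id = DB_ID('TestMirror');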


 

If database mirroring did not start successfully, or you received an error, here are a few scripts to troubleshoot the situation:

Both servers should be listening on the same port. To verify this, run the following command:

SELECT type_desc, port

FROM sys.tcp_endpoints;

We are listening on port 5022. This should be the same on the Principal and Mirror servers:


Database mirroring should be started on both servers. To verify this, run the following command:

SELECT state_desc

FROM sys.database_mirroring_endpoints;

The state_desc column on both the Principal and Mirror servers should show STARTED:


To start an Endpoint, run the following:

ALTER ENDPOINT <Endpoint Name>
STATE = STARTED
AS TCP (LISTENER_PORT = <port number>)
FOR DATABASE_MIRRORING (ROLE = ALL);

Roles should be the same on both the Principal and Mirror servers. To verify this, run:

SELECT role

FROM sys.database_mirroring_endpoints;

 


To verify that the login from the other server has CONNECT permission, run the following:

SELECT EP.name, SP.state,
       CONVERT(nvarchar(38), SUSER_NAME(SP.grantor_principal_id)) AS grantor,
       SP.type AS permission,
       CONVERT(nvarchar(46), SUSER_NAME(SP.grantee_principal_id)) AS grantee
FROM sys.server_permissions SP
JOIN sys.endpoints EP ON SP.major_id = EP.endpoint_id
ORDER BY permission, grantor, grantee;

 


You can see from the STATE and PERMISSION columns that the user has been granted CONNECT permission.
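
If the permission is missing, a sketch of the grant (substitute your actual endpoint name and the other instance's service account):

-- Allow the partner instance's service account to connect to the mirroring endpoint
GRANT CONNECT ON ENDPOINT::Mirroring TO [DOMAIN\User];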

SQL Server 2008 Compression on SAP

Storage is cheap, right? Browsing through my local hardware reseller’s web pages, I can see that I can buy disks at some $100 per TB. The problem is that enterprise storage isn’t cheap, especially when you outgrow what you have. Also remember that a 1TB production system is also a 1TB dev, QA, training etc. system (after a few system copies), so 5TB of production means 5TB of backups, and so on. Data propagation causes huge increases in total storage requirements.

On our demo landscape, we have some 5TB of enterprise-class storage for our SAP systems, totalling about 29 virtual servers. We’ve run out of space and are considering a substantial capital investment to go up to the next tier of enterprise storage – 3 or 4 times what we paid for the existing storage.

What’s more, we live in a world where a global recession has taken a big toll, and where concerns around sustainability mean that buying ever bigger and better hardware isn’t palatable. Virtualizing our SAP landscape saved us huge amounts of money, but the consolidation effect means that when you run out of space, the effect is much more costly.

All this means that we had to think in terms of saving space first and capital procurement second. Besides, most SAP systems are I/O bound these days anyhow.

I remember a story from the early days of the search engine Altavista, which was of course funded by DEC and ran on DEC AlphaServers. The story goes that they were running out of CPU and had ordered some more AlphaServers. Mike Burrows (who wrote the indexer) rewrote his code over the weekend and traded 30% CPU for 30% storage. When the AlphaServers arrived on Monday morning, they weren’t required.

I’ve been pondering what the benefits of database compression on MSSQL 2008 are, and when I met with some platform experts from Microsoft last week and heard how they were getting on, I was really intrigued.

DB2 has had row compression for some time, and part of its value proposition over MSSQL and Oracle is a 30% reduced database size. MSSQL introduced vardecimal compression in SQL 2005, but its ability to compress was limited to certain objects (principally BW cubes benefited the most).

With SQL 2008 this all changes. I’ll talk you through the different types of compression available in Microsoft SQL Server 2008 and what benefits they brought our BW production system.

To start with, our productive environment was 92,372MB. It’s pretty well maintained and we try to keep the size down with archiving, data deletion and good maintenance; it was some 120GB+ a year ago.

Note!!! If you are going to do any of this, then read SAP Note 991014 – Row Compression/Vardecimal on SQL Server. You do any of this at your own risk. Obviously don’t do this in a productive system before proving it in a non-productive environment.

1) Row level compression

The word “compression” is a bit of a misnomer: this doesn’t do any compression in the usual sense. The mechanism works on tables with numbers in them (decimal columns). MSSQL then stores the decimal data not as fixed-size fields, but using only the space each value actually requires. Specifically, the value “0” only requires 4 bits of space.
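
On SAP systems you drive this through the transports and report from Note 991014 (see below), but under the covers it corresponds to the standard SQL Server 2008 rebuild option; a hedged sketch against one of the tables mentioned later (the dbo schema is a placeholder, as SAP databases use their own schema):

-- Illustration only: row-compress a single table directly in T-SQL
ALTER TABLE dbo.RSBATCHSTACK REBUILD WITH (DATA_COMPRESSION = ROW);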

Note that on new installations of SAP this is already the default option. You will see this in the analysis report below if that is the case.

To do row compression, make sure you download the latest version of the transports attached to Note 991014, then run ABAP report RSDD_MSSQL_CUBEANALYZE. Select the 100,000 largest tables and “Use Row Compression”, and click “Start Checks”. You can refresh the screen until it’s done – it should take about 5 minutes, so go and get a cup of tea. When “CheckJobActiv” says no, it will show you a list of tables and their sizes. Click in the middle and press Ctrl-A to select all rows, then click “Compress Selected Table(s)”. You may as well do all the tables, although the biggest benefit comes from the top 100.

This will take several hours on a small system and maybe much longer on a bigger one. It will also generate a lot of transaction log, so you may want to switch the recovery model to Bulk-logged to reduce this. In our system I just ran it at a quiet time. On large productive systems you may need to run it in smaller chunks, during your support intervals.
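
Switching the recovery model for the duration of the run is a plain ALTER DATABASE; a sketch (the database name PRD is a placeholder):

-- Reduce log growth during the rebuilds, then switch back afterwards
ALTER DATABASE [PRD] SET RECOVERY BULK_LOGGED;
-- ... run the compression ...
ALTER DATABASE [PRD] SET RECOVERY FULL;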

So we expect row-level compression to work well on some tables and not on others, and this is exactly what happens. There are two types of table that really benefit: the RS* tables (which contain information about logs) and BW cube/data store tables (which are largely numeric). On these tables we got 40-60% compression, e.g. 1068MB down to 524MB (52%) for RSBATCHSTACK.

Tables like PSA tables and dictionary tables don’t fare so well, for example 6037MB to 5575MB (8%) for D010TAB.

In total this reduced our database size from 92,372MB to 77,208MB, a saving of 16%, without a net reduction in performance.

The neat thing about row compression is that it doesn’t affect performance, and this was borne out in our productive environment – queries and data loads are no slower.

2) Page compression

Page compression is a whole different thing: it takes common values in tables, e.g. Company Code/Client, and replaces them with tokens, so it will work on all tables. However, it carries a CPU overhead of 20-30%. I happen to know that CPU on our VMware farm runs at 30% (but we are I/O bound), so a bigger reduction in database size in exchange for some extra CPU cycles is worth it for us. You have to decide this based on your situation. Clearly, if you do this and you are already CPU bound, you will be in a worse situation.
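
Before committing a big table to page compression, you can let SQL Server estimate the gain with the stock procedure sp_estimate_data_compression_savings; a sketch (schema and table names are placeholders):

-- Estimate how much space PAGE compression would save for one table
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'D010TAB',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';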

You have to enable Expert Mode by going to RSDD_MSSQL_CUBEANALYZE and then choosing Settings -> Expert Mode on/off. This will give you a new button, “Page Compression”. Compress the tables in the same way.

Note – tables which contain transient data, like qRFC tables, shouldn’t be page compressed. Here’s a few to get you started: VBDATA, VBHDR, VBMOD, ARFCSTATE, ARFCSDATA, TRFCQDATA, TRFCQIN, TRFCQINS, TRFCQOUT, TRFCQSTATE, ARFCRDATA, ARFCRSTATE – but you may have more in your system. Enabling Page Compression won’t do any harm but will waste CPU cycles and not save any space on those tables.

It’s immediately clear that the impact of page compression is massive. Some DSO tables shrink from 2884MB to 269MB (91%). Even hard-to-compress tables like D010TAB shrink from 6037MB to 4162MB (31%).

In total we reduced the database size from 93,272MB to 51,061MB (45%). Our largest data loads still run in the same time as they did before, so the reduction in I/O has matched the increase in CPU cycles, as expected.

3) Index compression

What I noticed at this point was that the data was taking up a lot less space than the indexes. For example, table D010TAB used to be 2480MB for the data and 6037MB in total. Now it’s 662MB for the data and 4162MB overall. This means that while the data is compressed, the indexes don’t get any smaller and are a much larger part of the database size than before.

MSSQL 2008 supports index compression as well but it is not yet supported by SAP. The reasons for this are slightly unclear but it can apparently cause dumps during bulk operations like support package stacks, BW Change Runs etc. Expect a patch for this soon.
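
For reference, outside SAP's report an index compression run is just an ordinary index rebuild; a sketch (the schema is a placeholder, and note that ALL also rebuilds the clustered index, i.e. the data):

-- Page-compress all indexes of one table
ALTER INDEX ALL ON dbo.D010TAB REBUILD WITH (DATA_COMPRESSION = PAGE);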

In the spirit of this, I decided to play with Index Compression in our Development environment which is a recent copy of Production. Go back to RSDD_MSSQL_CUBEANALYZE, click the Index Compression button and do another compression run. This warns you that this is unsupported…

The impact of this is once again massive. D010TAB goes down from 6072MB to 1915MB in total (68% compression), and RSBATCHSTACK from 1009MB to 55MB (95% compression). Note the numbers are a bit different: I haven’t done this in Prod!

The total database size is 27,647MB, down from the same 92,372MB, which is an amazing 70% compression!

4) UCS2 compression (new in Microsoft SQL Server 2012)

I’ve not done this yet, but MSSQL 2012 has compression for Unicode data. This promises to halve the size of certain types of Unicode data sets. I’ll update this blog when I get my hands on a copy of the RTM version of MSSQL 2012.

5) Umm…How do I get my disk space back?

So I now have a set of systems that I’ve compressed from 90GB to 50GB. Unfortunately, the data files haven’t shrunk, and I have lots of free space, far more than the growth I need. My filesystem is 160GB, the database files are 90GB and the data size is 50GB! Remember, the purpose of this exercise was to liberate real disk space!

There are two options to do this: the first is to do an export/import, and the second is to do a DBCC SHRINKFILE.

Purists don’t like DBCC SHRINKFILE because it can cause fragmentation and confuse RAID controller read-ahead policies. Microsoft tell us (see the MSSQL blog) that the impact of this is much less than in days gone by, because they have improved their algorithms, and so have the RAID controller vendors, and therefore they recommend DBCC SHRINKFILE.
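
A sketch of the shrink itself; the logical file name is a placeholder you would look up in sys.database_files first, and the target size is in MB:

-- Find the logical names and current sizes of the database files
SELECT name, size/128 AS size_mb FROM sys.database_files;
-- Shrink a data file down to roughly the compressed size plus some headroom
DBCC SHRINKFILE (N'PRDDATA1', 60000);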

I’m quite interested because my database was SQL 2000, upgraded to 2005 and then 2008 over a 5 year period. This was one of the first BW 7.0 installations in the world and it’s been through the wars. Soon it will be upgraded to 2010.

For me the question is whether an export/import will clean the DB up and reduce fragmentation in a meaningful way, thereby improving performance. So for now, I’m going to do the DBCC SHRINKFILE and cut the size of my DB files. At some later stage I’ll do an export/import and we’ll see if that reduces space further, and increases performance.

Conclusion

Microsoft SQL Server 2008 brings real benefits to the size of SAP databases, without any material performance impact in our environment. In most SAP environments, which are I/O bound, this is a win/win for SAP customers running Microsoft platforms. Get on and do it! I’m about to send a change request to our Basis team to do this to our entire SAP landscape and save 1.5TB of disk space.

You can download this document from SAP SDN:
Using SQL Server Database Compression with SAP NetWeaver

Importing the transport of MSSCOMPRESS

If the report MSSCOMPRESS was not already imported into your SAP system by a basis support package, you have to import it manually. This should be done by the SAP basis administrator. First of all you have to unzip the cofile (Kxxx.SID) and data file (Rxxx.SID) of the transport, which is attached to SAP note 1488135. Then copy the files to the cofiles (usr\sap\trans\cofiles\) and data directory (usr\sap\trans\data\). The next step is to add the request to the import queue in SAP transaction STMS.


Use the F4 help for field Transport Request to choose your transport.


Select the request (YI3Kxxx) and press the “Import Request” button.


For SAP releases 720 and newer you may have to choose the import option “Ignore Invalid Component Version”.


Depending on the SAP release, this procedure might be slightly different. Your SAP basis administrator should be aware of the details.

Known issues

When pressing the “Compress” button, you may see the following warning:


You can simply ignore this warning by pressing OK; it has no impact at all. The root cause of this warning is not an issue in the report MSSCOMPRESS. It is a known bug in the SAP kernel, fixed by SAP note 1309615. After applying the newest SAP kernel patch you will no longer see this warning.

 

Compressing an SAP database using report MSSCOMPRESS

SAP released the report MSSCOMPRESS in SAP note 1488135. The report can be used to compress existing tables and indexes using the data compression feature of SQL Server 2008 (and all newer releases). You can choose between the compression type NONE, ROW or PAGE for the data (heap or clustered index) and the (non-clustered) indexes. MSSCOMPRESS performs the compression in dialog or using an SAP batch job. MSSCOMPRESS is particularly useful for two scenarios:

  • Compressing (non-clustered) indexes after activating the index compression support for SAP as described in SAP note 1488135
  • Re-compressing a SAP Unicode database in order to benefit from the improved Unicode compression of SQL Server 2008 R2

Prerequisites

If the report MSSCOMPRESS was not already imported into your SAP system by a basis support package, you have to import it manually. The details are described below.

MSSCOMPRESS can only be used for SQL Server 2008 or newer. You need the same privileges to use MSSCOMPRESS as you need for the SAP database utility SE14 (authority object S_DEVELOP).

Before using MSSCOMPRESS you should implement the correction instructions of SAP note 1459005. This will activate the index compression support in the SAP data dictionary. As long as the SAP data dictionary is not up-to-date you can only compress the data (heap or clustered index) using MSSCOMPRESS.


Once you have applied SAP note 1459005, you can compress data and (non-clustered) indexes using the report MSSCOMPRESS.


The default compression type for report MSSCOMPRESS is taken from the default compression type of the SAP data dictionary. The index compression type is kept in sync with the data compression type. However, you can choose the index compression type manually by checking the corresponding checkbox:


Starting MSSCOMPRESS

Use SAP transaction SA38 or SE38 to start the ABAP report MSSCOMPRESS.

To get a brief description of MSSCOMPRESS, press the “i” button or choose “Application Help” from the “Help” menu. In the following, MSSCOMPRESS is described in detail.


In the lower section of the screen, a list of all database tables of your SAP ABAP system is displayed. The list does not include tables of an installed JAVA schema, since the data dictionary for JAVA does not support compression. There are SAP JAVA solutions which have huge database tables, for example the SAP Enterprise Portal (EP). However, almost all data in an EP system is stored in fields of SQL Server data type IMAGE or VARBINARY(MAX). These data types cannot be compressed in SQL Server 2008 (R2) anyway.
The list of tables displayed in MSSCOMPRESS contains the number of indexes, the data/index size and the data/index compression types for each table. The tables in the list are sorted by total size (data + index size). The number of indexes does not include the clustered index: a clustered index (or primary key constraint) is considered to be DATA, and all non-clustered indexes are INDEXES. The data compression type can be “ROW”, “PAGE” or “none”. For a partitioned table you may see a combined compression type, for example “ROW, PAGE”; this is the case if some partitions are row compressed and some partitions of the same table are page compressed. The same applies to the index compression type. When a table does not have any non-clustered index, the number of indexes is shown as 0 and the index compression type is empty (“ “).

Workflow Overview

To start the compression you have to perform 3 steps:

  1. Choose the tables you want to compress (by filtering and selecting)
  2. Choose the options (compression types and runtime options)
  3. Choose the type of compression run (dialog or batch)

Compressing huge tables may result in high resource consumption (CPU, I/O, temporary data space, transaction log space, blocking database locks). Therefore you should make sure that sufficient space is available during the compression using MSSCOMPRESS. Furthermore, the compression should not run during peak hours. It is recommended to use the SQL Server recovery model “Bulk-logged” during compression in order to minimize the transaction log space needed.

You can also compress the SAP database in chunks. For example, you may choose the 10 largest non-compressed tables using the filters in MSSCOMPRESS and schedule the compression for the weekend. The other tables can then be compressed during the next weekend(s).

Choosing the tables

To choose the tables you want to compress, you can first filter the list of tables and then select tables within the filtered list. Applying a filter modifies the displayed list of tables. You can apply multiple filters at the same time. In the following example, all four possible filter types are applied:


After applying these filters, the list contains the 10 largest tables, whose name starts with “/BI”, data compression is “ROW” and index compression is “none”. You can then further select the tables within the filtered list by clicking on them. Use the SHIFT and CTRL keys while clicking in order to expand your selection. In this example, three tables are selected:


Choose the empty entry from the pop-up menu in order to remove a filter again. The name filter works a little differently: you can enable and disable it using the check box. By default, the filter criterion is “/BI*”, which filters all SAP BW tables. After changing the filter criterion you have to press the ENTER key (or the ENTER button on the right-hand side of the filter criterion).


Choosing the options

The compression options define the desired compression type of the tables after the compression run. Do not confuse them with the filter options “Data Compression Type” and “Index Compression Type”, which are used to choose the tables to be compressed: the filter options apply to the compression type of a table before the compression run.

You can choose the desired data and index compression type using the radio buttons. The SQL statement used for compression is
ALTER [TABLE | INDEX] … REBUILD WITH (DATA_COMPRESSION = [ROW | PAGE | NONE] …)
Before actually compressing, MSSCOMPRESS checks the current compression type of each table and index. If the current compression type already matches the desired compression type, nothing is done for that table or index. However, you can force these tables and indexes to be compressed regardless of their current state by selecting the corresponding checkboxes (“Force Data Rebuild” and “Force Index Rebuild”). These options are needed if you want to compress an already compressed table again. This is particularly useful after upgrading SQL Server 2008 to SQL Server 2008 R2, in order to benefit from the compression improvements for Unicode strings.


MSSCOMPRESS automatically removes the vardecimal storage format of a table when compressing it. You may change this in the menu “Goto” => “Advanced Options”. However, we do not see the advantage of changing this default behavior.

The compression options have an impact on the result of the compression run. There are two options which have an impact on the compression run itself:


You can configure the maximum number of threads used to execute a single SQL statement with the SQL Server configuration option max degree of parallelism. For an SAP system, this is typically set to 1. To override this configuration for the compression run, set the checkbox “MAXDOP” and enter a value greater than 1. MSSCOMPRESS compresses table by table and index by index. It does not compress multiple tables at the same point in time, even if “MAXDOP” is set. If you want to compress many tables in parallel, you should schedule a few compression runs of distinct tables for the same period. The benefit of parallel compression runs depends on your hardware resources.
When setting the checkbox “Use ONLINE”, the compression does not acquire table locks, which results in better concurrency. However, this does not work for some tables, for example tables with a text or image field. If you have set the checkbox, the compression fails for these particular tables; MSSCOMPRESS automatically repeats the failed compression without the online option.
Generally speaking, the online option makes sense when compressing a few tables in dialog during normal working hours. It does not make sense when compressing many tables in a batch job during off-peak hours.
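
For reference, at the SQL level these two runtime options map onto ordinary rebuild options; a sketch (the table name is a placeholder, and ONLINE = ON requires Enterprise Edition):

-- Rough equivalent of MAXDOP and ONLINE when rebuilding outside MSSCOMPRESS
ALTER INDEX ALL ON dbo.SOMETABLE
    REBUILD WITH (DATA_COMPRESSION = PAGE, MAXDOP = 4, ONLINE = ON);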

Starting the compression

Use the radio buttons to choose what you want to compress. By default, all selected tables are compressed. You may also choose the filtered tables (all tables displayed in the list) or all tables (of the ABAP schema).


When you press the “Compress” button, a dialog box appears. Here you can choose whether you want to start the compression run in dialog, start it immediately as a batch job, or schedule a batch job for off-peak hours.


When compressing in dialog, a progress indicator is displayed in the lower left corner of the window. Once the compression run has finished, the compression type and data/index size are refreshed in the list of tables, and the current filter is applied again. The selected tables remain selected, which makes it easier to see the result of the compression run. However, this may not be the case when using filters, for example the filter option “Row-compressed”: after page-compressing a row-compressed table, the filtered list no longer contains the table (because the filter criterion no longer applies to it).


Once you have started a compression run as a batch job, you can see the job status in the main screen of MSSCOMPRESS. However, you have to update the job status manually by pressing the button on the right-hand side of the job status field.


When using a batch job, compression types and data/index size in the list of filtered tables will not be updated automatically. However, you can do this manually by pressing the “Refresh” button.


Checking the log files

MSSCOMPRESS creates a compression log for each table or index compression. It is not stored as a file; it uses the standard SAP Application Log. You can display the compression logs by pressing the “Log” button. By default, only the logs from the current day are displayed. However, you may change the from-date and to-date before starting the application log.


If the compression is running in an SAP batch job, a job log is also written. Press the “Job Log” button to get the standard job selection screen of SAP. Here you can check the status and job log of any SAP batch job.


You can read the job log while the job is still running. The log contains entries about the progress of the whole job (“… compressed 15% …”). The progress is calculated from the size of the already compressed tables compared to the size of all tables scheduled for compression. In order to save space in the database, MSSCOMPRESS always starts the compression with the smallest tables of the compression run. Therefore, progress appears to be very small even after the first tables have already been compressed.


Moving SQL Server Logins Between Servers

A very common challenge for a SQL Server DBA is moving SQL Server logins between servers. When you recreate a SQL Server login (not a Windows login), you get a new security ID (SID) by default, even though you have the same user name and password.
Issues arise with these SIDs when you restore a database from another server: you can’t access the database. If you try to create the user entry in the database, you get an error message saying it already exists, and the statement fails. Yet if you look at the users in the database, the login doesn’t appear to be mapped to any of them.
Here is an example. First, create a database and a login, and add the user to the database:

CREATE DATABASE LoginTest;
GO
CREATE LOGIN NicTest WITH PASSWORD = 'P@ssw0rd';
GO
USE LoginTest;
GO
CREATE USER NicTest FROM LOGIN NicTest;
GO
USE master;
GO

 

Next, detach the database, then drop and recreate the login:

EXEC sp_detach_db 'LoginTest';
GO
DROP LOGIN NicTest;
GO
CREATE LOGIN NicTest WITH PASSWORD = 'P@ssw0rd';
GO

If we reattach the database, we now have a new login with a SID that’s different from the one the user has in the database, even though the user has the same name. This is similar to what happens when you restore a database on another server and recreate the login there:

CREATE DATABASE LoginTest
ON (FILENAME = 'C:\SQLData\Data\LoginTest.mdf')
FOR ATTACH;
GO

If we try to use the login to access the database, or to create the database user for the login, neither will work:

USE LoginTest;
GO
CREATE USER NicTest FOR LOGIN NicTest;
GO
Msg 15023, Level 16, State 1, Line 1
User, group, or role 'NicTest' already exists in the
current database.

The standard resolution for this has been to use sp_change_users_login. It has an option to list any mismatched logins and database users: those with the same names but different SIDs.

EXEC sp_change_users_login 'Report';
GO

UserName UserSID
----------------------------------------------
NicTest 0x5D5F9089AFE1D4428106DE1B52BE0DFC

sp_change_users_login then offers an option to fix it. The way sp_change_users_login fixes this issue is to update the SID of the database user to match the login:

EXEC sp_change_users_login 'Update_One', 'NicTest', 'NicTest';
GO

In Service Pack 2 of SQL Server 2005, new syntax was introduced to deal with this:

ALTER USER NicTest WITH LOGIN = NicTest;
GO

There are potential issues with this since it only temporarily fixes the problem or, at worst, propagates it to other servers. It’s not the database user’s SID that needs fixing; it’s the login’s SID. If the login’s SID were correct, there wouldn’t be a problem with copying the databases around. Here are a couple of common scenarios:

  1. A database is restored from another server (or a reinstalled server).
  2. The logins that use the database need to be recreated.

A workaround for this problem is to specify the SID value when creating the login in T-SQL. It is an optional parameter: if you provide the same value as on the other server, you don’t have the problem. For example, instead of executing sp_change_users_login or ALTER USER as in the previous example, we could have done the following:

USE LoginTest;
GO
SELECT sid FROM sysusers WHERE name = 'NicTest';
GO

sid
-----------------------------------
0x5D5F9089AFE1D4428106DE1B52BE0DFC

(1 row(s) affected)

What we could then have done was:

CREATE LOGIN NicTest WITH PASSWORD = 'P@ssw0rd',
SID = 0x5D5F9089AFE1D4428106DE1B52BE0DFC;
GO

The upside of this is that it’s a permanent fix. Next time you restore the database, you won’t have to fix it again.
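
On the source server you can read an existing login's SID from sys.server_principals, so you can script the CREATE LOGIN ... WITH SID statement up front; a sketch:

-- On the source server: capture the login's SID to reuse in CREATE LOGIN ... WITH SID
SELECT name, sid
FROM sys.server_principals
WHERE name = 'NicTest';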

 

How To: Execute all SQL files in a Directory with PowerShell

Get-ChildItem -Path "C:\SQLScripts" -Filter "*.sql" | % {Invoke-Sqlcmd -InputFile $_.FullName}

or

foreach ($f in Get-ChildItem -Path "C:\SQLScripts\" -Filter *.sql | Sort-Object)
{
$out = "C:\SQLScripts\OUTPUT\" + $f.name.split(".")[0] + ".txt" ;
Invoke-Sqlcmd -ServerInstance <> -Database <> -InputFile $f.fullname | Format-Table | Out-File -FilePath $out
}

 

SQL Server 2008 – Install PowerShell

Download Microsoft SQL Server 2008 Feature Pack

http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=8824

Download only these packages for your platform:

Microsoft SQL Server 2008 Management Objects

Microsoft Windows PowerShell Extensions for SQL Server

 

Then, in PowerShell, load the SMO assemblies with these commands:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO")
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended")
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo")
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum")

SAP Tuning

Tuning SAP always needs a considerable period of observation before prescribing or implementing anything on the system. It needs a considerable amount of patience to listen to the existing tech team, the users and, above all, some managers (a bunch of jokers who never do things right and habitually only manage the scenario). Besides, one also needs to listen to what the system monitoring transactions, system logs, debugging utilities etc. are screaming.

To start with, we need to put together a questionnaire, which must carry some of the following questions:

  • When are you experiencing that the system is slow? During a transaction, during execution of a report, or during a transaction associated with a report that generates online documents? – User
  • When are you experiencing a slow response? While using standard SAP transaction codes and programs, or the customised (Z) transaction codes and programs? – User
  • What is the available downtime? – Service Level Management
  • What is your storage architecture? – Release Management
  • Are you using some kind of middleware product which provides only a compiled version of its library routines? – Release Management
  • What is the sampling rate of error messages? Do you have ready-made history documents of these in hand? – Basis and functional consultants
  • What is the standard network response time between the application level and the presentation level? – Basis administrators
  • What are the known issues (pardon me for not using the much-referenced phrase “known error”) with the hardware you are using, with the storage, the OS, the middleware, the Oracle database version and, obviously, the release and version of SAP itself? – System administrator, storage administrator, database administrator, basis administrator and the fact sheets, notes and whitepapers from the respective vendors. Many have argued against this, saying that this particular set of documents can bias the investigation and then the remedy, but believe me, it is always better to have these documents in hand; otherwise the whole thing may end up as a wild goose chase. Remember that whether or not OEM support is available may matter heavily when you come to implement a solution – yes, even a tuning solution.
  • Do you have benchmarks for the customised ABAP programs? If yes, check their relevance against the data growth rate (yes, this is very important); otherwise make your own. I have something of my own and will be sharing it here.

These are some basic things one should gather before undertaking a tuning job for SAP. Otherwise, one may end up with a tuning suggestion which is impractical or not cost effective. In this whole series of shareable documents, I have tried to restrict myself to suggestions which do not involve additional cost.

I have already expressed that the performance of an SAP system depends on many things: hardware, OS, the storage box, the storage agent, the clustering agent, the database, customized (Z) programs and, above all, the usage intelligence of the users. But as a basis tuning expert you will always feel the need for a mirror which reflects at least an image of how your system is being utilized and how it is behaving. I have found that SAP transaction code ST04 alone can help you out here. It basically shows the database’s behaviour and usage in detail, both as history and online.

The question may arise: why the DB? But DB performance will show you everything; for example, excess I/O can point to an OS issue, to improper load balancing, to the way a query is used, or to the query itself.

So which areas must we look into in ST04? I am describing here its use where Oracle is the database underneath SAP.

The first screen alone shows you a lot of statistics and indicators, but my interest is in the data buffer cache size and quality. Quality must be above 97%; then apparently there is no major problem. If it is less than 95% it is problematic. But even an improperly tuned recursive query can sometimes falsely push your quality value above 97%, so a deeper investigation is always required.
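
If you want to cross-check the buffer quality figure that ST04 reports directly against the database, the classic calculation from V$SYSSTAT looks roughly like this (Oracle SQL; assumes you have the privileges to read the V$ views):

-- Approximate data buffer cache hit ratio, as a percentage
SELECT ROUND((1 - phy.value / (cur.value + con.value)) * 100, 2) AS buffer_quality_pct
FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE  cur.name = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads';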

My second favourite place to look is the sort section: disk sorts should be less than 0.1% of total sorts.
Another important area is the shared pool statistics. DD (data dictionary) cache quality should be more than 99%, and similarly the SQL area get ratio; but if the database is Oracle 10.2.0.2 it may show you an abnormally low figure. This is due to Oracle bug 5452234, and to rectify it you need to install patch set 10.2.0.4.

There is also other interesting data under instance performance, with some statistics that show you the parsing status of the system. The soft parse ratio has a maximum value of 1, which is not actually achievable because each SQL statement is hard parsed at least once and then soft parsed on subsequent executions; but for a healthy system it must be as close to 1 as possible. Similarly, the in-memory sort ratio should have a high value for a healthy system. In fact, the fewer the disk sorts the better.

There is a lot of magic in the detail screen, which is what attracts me to this transaction code. One of the most interesting options is the one that shows an excerpt of the shared cache. Just run the report keeping everything blank. Normally this gives you a list of the SQL statements hitting the database, sorted by descending disk reads. This is highly important: the more disk is used, the slower the operation. But a very large database with a huge growth rate and a lot of customized programs can show you some startling figures here, and not all of the problem programs and SQL statements are in the top 10 rows. As a next step, from the same list, I sort in descending order on the column which shows buffer gets per row and note those programs and SQL statements. A combination of these two will mainly point out which programs/SQLs are the main causes of slow performance.

The ST04 transaction has another beautiful piece of magic. Click on any of the identified SQL statements and it shows you the execution plan and the line of the program from which the SQL statement is generated.

That is enough for today. I am in favour of providing some screenshots in this article to give a better illustration; let’s see whether that is possible, as I am running this blog site for free, so space is limited.

Go to the detail screen of ST04.

This has some exciting buttons, namely Oracle Session, SQL Request, Table Access etc. These actually let you see the facts inside the SAP database: what is happening, and who is killing your SAP? Unless there is a bug in the system, or some user is selecting parameters abnormally in a report-executing transaction, an SAP system does not normally misbehave or slow down; in those two cases you can refer directly to SAP support and get it resolved. Eliminating those two, what remains is the customised programs, for which SAP support can be accessed on a chargeable basis.

Remember here you are getting free guidelines on SAP customized program tuning ;)

Anyway, jokes apart, out of all the buttons mentioned above, Oracle Session helps you see the actual sessions currently being processed, whereas SQL Request shows the current as well as the historical SQL statements. These SQL statements come out as a result of the Open SQL statements used in ABAP.

Clicking on the “SQL Request” button will show you another screen with parameters; keep everything empty there and execute. Roughly, it will bring out most of the SQL statements with their calling program name (remember: the program name, not the SAP transaction). The straightforward method is to:

  1. Search for the lines which have the maximum disk reads and the maximum buffer gets per record.
  2. Clicking on each of those lines will show you the resource cost of the SQL statement.
  3. You can also go to the line of the ABAP code from which the statement is generated.
  4. Show it to the ABAPer and ask for tuning. You may also look into the possibility of creating an index, or of wrong/incomplete joins, and dig further.
  5. If you are preparing a report and handing over all the problematic cases together, then also check out / include those programs which use a lot of buffer but fetch relatively few records. These programs slowly poison your system: they not only run inefficiently but also prevent other programs from using the resources they need.

Some of you may raise your eyebrows and say I am not using a top-down approach to tuning. But I strongly believe that if something can be tuned and built better even under incorrect global/system-wide parameters, it will run even better once those global/system-wide parameters are corrected.

This transaction also has a useful button which gives access to the various dynamic performance views (V$), which are a very good resource for looking deeper: memory analysis and so on.

This is a beautiful transaction in which SAP actually produces an image of its architecture. Every day I look back into it hoping to learn something more about the inner details of SAP, and I have never returned empty handed. Besides showing the current performance scenario, it also shows how effectively the system landscape is designed, how effectively the design is synchronized with the SAP architecture, and how effectively the load balancing is done.

Screenshots 1, 2 and 3 are part of the initial display of the ST02 transaction. Transaction ST02 shows you the current snapshot of the buffers, the SAP memory and the call statistics on its initial screen, and believe me, it gives enough information on this screen alone. Once you glance through it, a lot of questions will come to mind about the buffering techniques, overflows, swapping etc.

Let us take an example to illustrate the scenario. You need a specific brand of pencil and go to the shop. Broadly, one of two things can happen: either the branded pencil is available off the shelf, or it is not. In the latter case the shopkeeper will politely ask you to place an order and wait until he gets your requirement, either from the store room or from the manufacturer. Look at the word wait: it represents delay, or slowness of the process. So the basic ideology is clear: to get things done properly, you must have the right things, in the right amount and quality, on the shelf, i.e. in the BUFFER. The primary aim of an SAP tuner must be to look into this basic frame of mind, i.e.

  • Is the buffer configuration correct?
  • Who or what is behind the nonsensical use of the buffer?
  • What are the possibilities to rectify (a technical job) and control (service management processes) the situation?

(Please note that incorrect configuration or usage of the buffers alone can make server CPU utilization, disk I/O and network I/O explode, and it will be a nightmare for the basis team as well as their manager.)

 

SAP maintains its buffers as an additional layer on top of the database buffer layer. So anything that is not available in the SAP buffers has to be fetched from the database buffer layer, which in turn may have to be fetched from the physical database files. As is evident in Screenshot 1, the initial buffer record is performing better than the other parts and has less database access. If your application server and database server are on different hardware (the normal trend in a three-tier architecture), you always add the extra overhead of network I/O between application and database, and the response time increases.

This is just a preface on the importance of the ST02 transaction in SAP. A lot of other questions will come to mind about the architectural summary and the SAP parameters that need to be tuned (the normal SAP Basis consultant approach is to learn transaction codes and parameters by heart). I am preparing my understanding of these and will cover them in the next phase; I hope to be able to do so within another couple of days.

Once again, a reminder and a request: please express your view if you feel I am wrong somewhere; this will help increase my knowledge too.

 

Srl

 

Screen Shots

Screenshots 1, 2 and 3 (the initial display of transaction ST02) appear here.

As I pointed out earlier, there are three main points we need to concentrate on when dealing with the SAP buffers. I am going to discuss them one by one here.

 

The first point was: is the buffer configuration correct?

Deriving the answer at a top level is very easy: use transaction ST02, and if you see any figures highlighted with a red background, there is a problem and you should start investigating it. So which areas should one concentrate on? Check where the swaps are happening: is it in the SAP buffers, the memory, or the call statistics?

 

The rule of thumb for the call statistics is very easy; the hit ratio must be above 95% for “Select” and “Select Single”.

 

The rule of thumb for buffers and memory is: if you see a red highlight, you increase the value using the transactions for viewing and changing profiles (RZ10 and/or RZ11). In fact, most of the web sites discussing ways of mitigating these red-highlighted areas in ST02 suggest exactly that. But this is not the right approach. One should go through the history of such buffers to analyse the situation. Always remember that an ill-planned or ill-calculated increase in the SAP memory limits and further allocation can stimulate paging activity and can in turn be fatal. This is especially true if your central instance and database reside on the same physical server, and remember that database paging is dangerous as far as SAP performance is concerned.

 

This also needs simultaneous observation of disk I/O, memory I/O and paging activity data from the OS side. You also need to analyse whether your transport schedule is correct: transport activity triggers flushing of the SAP buffers, hence it is always advisable to do transports in the least loaded part of the day. If you have a three-tier architecture with application servers, careful observation of SAR reports and trend analysis will show you the same.

 

Also check the table buffers and their contents. How many Z tables has your programmer kept in the buffer? Are they really required? Did you benchmark the performance of the associated programs before buffering those tables?

Some important parameters are listed below as far as changing them is concerned. Use transactions RZ10 and RZ11 to manipulate them. I feel this is enough to close the ST02 topic; the rest depends on SAP and on the Basis consultant’s intelligence.

 

  1. For Table Buffer or TABL
    1. zcsa/table_buffer_area – size of the table buffer data area.
    2. zcsa/db_max_buftab – directory entries, one for every resident table.
  2. For single-record key table buffer or TABLP
    1. rtbb/max_tables – directory entries, one for each table.
    2. rtbb/buffer_length – size of the data area.
  3. Program buffer
    1. abap/buffersize – the only parameter; the number of directory entries is calculated automatically.
  4. For Screen Buffer or PRES
    1. zcsa/bufdir_entries – directory size, one per screen.
    2. zcsa/presentation_buffer_area – total screen buffer size in KB.
  5. CUA buffer
    1. rsdb/cua/buffersize – total buffer size in KB; the number of directories is calculated by dividing this by 2K.
  6. Roll and paging buffer
    1. rdisp/ROLL_SHM – for the roll buffer.
    2. rdisp/PG_SHM – for the paging buffer.
  7. Calendar buffer
    • zcsa/calendar_area

I want to start this with a scorecard. This scorecard will tell you when and where to look for tuning activities on your SAP servers, and it uses the different values you actually see in transaction ST03.

This is a very handy tip when looking into the ST03 transaction. Let us look at the ST03 transaction, the factors mentioned above as variable1 and variable2, and some definitions of them. You can always find the relevant information at http://help.sap.com, but this is a ready-made chart. There is a lot happening inside an SAP system, and a small configuration mistake can slowly poison your entire system. CPU time is actually the total time used by the application server CPU for loading and generating ABAP source code and processing screens from the database information, creati…

I wanted to start writing this page as a ready reckoner for when users report slowness in the system. So, at the first level, I attempted the impossible, i.e. creating a single drawing summarizing the SAP memory structure and usage, which is actually well elaborated and written up in various books, SAP online help etc.

 

So the above drawing is something which I can always keep on my desk for ready reference. Apart from this screen, I would also like you to read SAP Note 103747.